I've been thinking about robots learning how to act in the world around them. For any task, let's presume a program could be written to get the job done. How much effort would it take to cover the task, including all the corner cases? Most solid software takes a lot of effort. The devil's in the details.
However, the details are all around us, which is what makes automated learning attractive: if a strategy doesn't work, modify it, and then automate the modification itself. This glosses over much of the "how" question, and bootstrapping some answers into the system might speed things along. But why work out all the bugs for the system if the system can work out the bugs for itself?
I think the same issue can apply to many types of software, by the way.
The ability to sense the effects of actions is important in all this, too.
It seems that "the ability to sense the effects of actions" is the core of cybernetics. It almost seems odd to build a whole discipline around that one idea, but I'm glad I finally feel like I understand the term.
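To make that loop concrete for myself, here is a tiny Python sketch. Everything in it is a made-up stand-in (the hidden target, the sense_effect scoring, the random tweak), not any real robot API: a strategy acts, the system senses the effect, and it keeps the modifications that sense better.

    import random

    def sense_effect(strategy):
        # Stand-in for sensing: score how well the strategy's actions
        # worked. The "world" here is just a hidden target value the
        # strategy is trying to hit; the learner never sees it directly.
        target = 7.3
        return -abs(strategy - target)  # higher is better

    def learn(steps=200):
        strategy = 0.0
        best_score = sense_effect(strategy)
        for _ in range(steps):
            # If the current strategy isn't working, modify it...
            candidate = strategy + random.uniform(-1.0, 1.0)
            score = sense_effect(candidate)
            # ...and keep the modification only if its sensed effect
            # is an improvement.
            if score > best_score:
                strategy, best_score = candidate, score
        return strategy

    print(f"learned strategy: {learn():.2f}")  # lands near 7.3

The programmer never writes the target into the strategy; the system works that detail out for itself through the sense-and-modify loop, which I take to be the cybernetics point.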