Tuesday, October 6, 2009

Parameterized Subtasks

I've been studying logical planning and hierarchical reinforcement learning. It's common to see parameterized subtasks: for example, "grab the apple" vs. "grab the box". Those could be seen as the function calls "grab(apple)" and "grab(box)".

Some of the logical systems manage to treat parameterized tasks genuinely as parameterized, using some strategy for plugging in possible argument values. But not all of them do. It's easier to search through the options if you just instantiate every possible case (or at least the relevant cases?). So instead of the general "grab(x)", you deal with "grab(apple)" and "grab(box)" as individual cases.
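To make that concrete for myself, here's a minimal sketch of what instantiating every case could look like. The subtask name "grab", the argument "x", and the object list are just invented for illustration, not taken from any particular system:

    # Grounding a parameterized subtask over concrete objects.
    # "grab", "x", and OBJECTS are illustrative placeholders.
    from itertools import product

    OBJECTS = ["apple", "box"]

    def ground(subtask, arg_domains):
        """Expand a parameterized subtask into one concrete subtask per argument tuple."""
        name, params = subtask
        return [(name, args) for args in product(*(arg_domains[p] for p in params))]

    # The general "grab(x)" becomes the individual cases grab(apple) and grab(box).
    print(ground(("grab", ["x"]), {"x": OBJECTS}))
    # [('grab', ('apple',)), ('grab', ('box',))]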

All of the reinforcement learning algorithms I've seen so far handle it this way, instantiating each individual case. But it's hard to read every paper; maybe I just haven't seen the right ones yet. Anyway, depending on how the arguments are represented, it seems like you lose a lot of the ability to learn general concepts when you learn each instantiation from scratch.
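To spell out what I mean by losing generality, here's a toy contrast (all the names and features below are invented) between keeping one independent value table per instantiated subtask and sharing a single learner that sees the argument as part of its features:

    # Toy contrast; the subtasks, objects, and features are invented for illustration.
    from collections import defaultdict

    # (a) One independent Q-table per instantiation: nothing learned about
    #     grab(apple) transfers to grab(box).
    q_per_instance = {
        ("grab", "apple"): defaultdict(float),
        ("grab", "box"):   defaultdict(float),
    }

    # (b) One shared linear Q-function over features of (state, object):
    #     learning about one object can generalize to others with similar features.
    def features(state, obj):
        return {
            "dist_to_obj": state["dist"][obj],
            "obj_is_heavy": 1.0 if obj == "box" else 0.0,
        }

    weights = defaultdict(float)

    def q_shared(state, obj):
        return sum(weights[f] * value for f, value in features(state, obj).items())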

I also wonder about genetic programming. Do they generate parameterized functions there? Or do they usually just generate lower-level constructs?

1 comment:

  1. A Koza paper from 2000, "Evolution of a Controller with a Free Variable using Genetic Programming", describes using a fitness function that tests a variety of values of the free variable in a simulator. It took them about 3 * 10^16 cycles of computer time (24 hours on a cluster) to find their solution after 42 generations.
