Limitations of autonomous, cognitive agents

by Peter Small


Stigmergic systems self-organize and become increasingly efficient because of a positive feedback effect between individuals and an environment. Individuals change the environment, and those changes in turn alter how individuals act upon it next. This endless loop of changes and reactions brings about increasing order and efficiency (see references to dynamic complex systems for further explanation of this process).
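The feedback loop described above can be sketched as a toy simulation. This is only an illustrative model, not anything from the text: the routes, deposit rule, and evaporation rate are all assumptions. Agents choose between two routes in proportion to the "pheromone" already laid on each, and each trip reinforces the route taken; because the cheaper route is reinforced more per trip, a small early advantage snowballs into near-unanimous preference.

```python
import random

def simulate_stigmergy(steps=2000, evaporation=0.01, seed=42):
    """Toy stigmergy loop (illustrative assumptions throughout):
    agents pick one of two routes in proportion to the pheromone on it,
    then deposit pheromone inversely proportional to the route's cost.
    Evaporation slowly forgets old choices, keeping the loop adaptive."""
    random.seed(seed)
    pheromone = {"short": 1.0, "long": 1.0}   # start with no bias
    cost = {"short": 1.0, "long": 2.0}        # traversal cost per trip
    for _ in range(steps):
        total = pheromone["short"] + pheromone["long"]
        # choice probability follows existing pheromone: the feedback loop
        route = "short" if random.random() < pheromone["short"] / total else "long"
        # cheaper trips reinforce their route more often per unit time
        pheromone[route] += 1.0 / cost[route]
        for r in pheromone:
            pheromone[r] *= 1.0 - evaporation
    total = pheromone["short"] + pheromone["long"]
    return pheromone["short"] / total         # share of pheromone on the short route

if __name__ == "__main__":
    print(f"short-route pheromone share: {simulate_stigmergy():.2f}")
```

No agent here plans or reasons; the increasing order emerges purely from the changes each agent leaves behind in the shared environment.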

Where the environment is a Web site and the individuals are people, changes to the Web site require an agent to act as an intermediary. These agents can be very simple or very complex, depending upon the purpose of the Web site.

It is very tempting to imagine that you can use autonomous, cognitive agents here to replace human involvement. This is a mistake. Agents are used to enhance human involvement, not to replace it. Once you start to hand control over to agents, the system loses power, because you are limiting the role of the most powerful computing device known to man: the human brain. The idea is to create a system that makes maximum use of human knowledge and initiative.

Including humans in the feedback loop of a stigmergic system will not prevent the stigmergic effect from taking place. Stigmergy affects human behavior just as surely as it affects an ant's or a robot's behavior. And on the Web, it is human behavior you want to influence, not an agent's.

In stigmergic systems, agents are used simply as tools that give users an improved ability to interact with a system. Certainly you can design agents that use neural nets and genetic algorithms to recognize patterns or make decisions. Certainly you can arrange for agents to communicate and cooperate with each other. But at the end of the day, this doesn't amount to any real intelligence; that has to come through human involvement.

It is quite unrealistic to imagine that autonomous, cognitive agents can play any intelligent part in stigmergic systems. Years of fruitless effort with AI have proved this point beyond doubt - and even the vast improvements in computing power and increased knowledge about the functioning of the human brain do not make any difference.

To understand why this is so, it is worth reading two articles written by Yehouda Harpaz: Reasoning Errors In The Current Models of Human Cognition and his criticism of Stuart Kauffman's book 'At home in the universe'. You may not agree with everything he writes, but it will certainly put the application of autonomous, cognitive agents into perspective.