A Cognitive Control Architecture for the Perception-Action Cycle in Robots and Agents

Vassilis Cutsuridis*, John G. Taylor

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

We show how visual perception, recognition, attention, cognitive control, value attribution, decision-making, affordances and action can be melded together coherently in a cognitive control architecture of the perception–action cycle for visually guided reaching and grasping of objects by a robot or an agent. The work is based on the notion that separate visuomotor channels are activated in parallel by specific visual inputs and are continuously modulated by attention and reward, which together control the robot's/agent's action repertoire. The suggested visual apparatus allows the robot/agent to recognize both an object's shape and its location, extract affordances, and formulate motor plans for reaching and grasping. A focus-of-attention signal plays an instrumental role in selecting the correct object at its corresponding location, as well as in selecting the most appropriate arm-reaching and hand-grasping configuration from a set of candidate configurations based on the success of previous experiences. The cognitive control architecture consists of a number of neurocomputational mechanisms heavily supported by experimental brain evidence: spatial saliency, object selectivity, invariance to object transformations, focus of attention, resonance, motor priming, spatial-to-joint direction transformation and volitional scaling of movement.
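The core idea of the abstract, parallel visuomotor channels whose activations are multiplicatively modulated by attention and reward before one is selected for action, can be illustrated with a minimal sketch. All names, values and the winner-take-all rule below are illustrative assumptions for exposition, not code or equations from the paper itself.

```python
# Illustrative sketch (not from the paper): each "channel" pairs a visual
# affordance with an arm/hand configuration. Its activation is bottom-up
# salience scaled by a focus-of-attention gain and a reward estimate
# learned from the success of previous grasps.

def channel_activation(salience, attention_gain, reward_estimate):
    """Multiplicative modulation of one visuomotor channel."""
    return salience * attention_gain * reward_estimate

def select_action(channels):
    """Winner-take-all selection of the most appropriate reaching/grasping
    configuration among the modulated channels."""
    return max(channels, key=lambda c: channel_activation(
        c["salience"], c["attention"], c["reward"]))

# Hypothetical channels for objects in the scene.
channels = [
    {"name": "precision_grip_mug",  "salience": 0.6, "attention": 1.5, "reward": 0.90},
    {"name": "power_grasp_bottle",  "salience": 0.8, "attention": 0.7, "reward": 0.80},
    {"name": "pinch_grip_pen",      "salience": 0.4, "attention": 1.0, "reward": 0.95},
]

print(select_action(channels)["name"])  # the channel with the highest product wins
```

In this toy version, the mug's channel wins (0.6 × 1.5 × 0.9 = 0.81) even though the bottle is visually more salient, because attention and prior reward bias the competition, which is the gist of the modulation the abstract describes.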
Original language: English
Article number: 3
Pages (from-to): 383-395
Number of pages: 13
Journal: Cognitive Computation
Volume: 5
Issue number: 3
DOIs
Publication status: Published - 11 Apr 2013
Externally published: Yes
