Hierarchical Multiagent Learning from Demonstration

dc.contributor.advisor: Luke, Sean
dc.contributor.author: Sullivan, Keith
dc.creator: Sullivan, Keith
dc.date.accessioned: 2015-07-29T18:42:48Z
dc.date.available: 2015-07-29T18:42:48Z
dc.date.issued: 2015
dc.description.abstract: Developing agent behaviors is often a tedious, time-consuming task consisting of repeated code, test, and debug cycles. Despite these difficulties, complex agent behaviors have been developed, but they require significant programming ability. An alternative approach is to have a human train the agents, a process called learning from demonstration. This thesis develops a learning from demonstration system called Hierarchical Training of Agent Behaviors (HiTAB) that allows rapid training of complex agent behaviors. In HiTAB, a human manually decomposes a complex behavior into small, easier-to-train pieces, which are then reassembled in a hierarchy to form the final complex behavior. This decomposition shrinks the learning space, allowing rapid training. I used HiTAB to train George Mason University's humanoid robot soccer team, marking the first time a team applied machine learning techniques at the competition venue. Building on this initial work, I developed several algorithms to automatically correct demonstrator error.
dc.format.extent: 188 pages
dc.identifier.uri: https://hdl.handle.net/1920/9689
dc.language.iso: en
dc.rights: Copyright 2015 Keith Sullivan
dc.subject: Computer science
dc.subject: Artificial Intelligence
dc.subject: Machine learning
dc.subject: Multiagent Systems
dc.subject: Robotics
dc.title: Hierarchical Multiagent Learning from Demonstration
dc.type: Dissertation
thesis.degree.discipline: Computer Science
thesis.degree.grantor: George Mason University
thesis.degree.level: Doctoral
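
The abstract above describes the core idea behind HiTAB: train small behaviors from a handful of demonstrations, then reuse those trained behaviors as the building blocks of a higher-level behavior. The sketch below is a minimal, hypothetical illustration of that decomposition, not HiTAB's actual implementation or API; the class and function names are invented, and the learned policy here is a simple nearest-neighbor lookup rather than the richer models (e.g., decision trees over hierarchical finite-state automata) the dissertation uses.

```python
# Illustrative sketch of hierarchical learning from demonstration.
# All names here are hypothetical; this is not HiTAB's code.

from dataclasses import dataclass, field
from math import dist
from typing import Callable, List, Tuple


@dataclass
class LearnedBehavior:
    """A behavior trained from (feature vector -> action) demonstrations.

    The policy is a 1-nearest-neighbor lookup purely for illustration.
    """
    name: str
    examples: List[Tuple[Tuple[float, ...], Callable]] = field(default_factory=list)

    def demonstrate(self, features: Tuple[float, ...], action: Callable) -> None:
        # Record one demonstrated (situation, action) pair.
        self.examples.append((features, action))

    def act(self, features: Tuple[float, ...]) -> None:
        # Execute the action whose demonstrated situation is closest.
        _, action = min(self.examples, key=lambda ex: dist(ex[0], features))
        action(features)


# --- Low-level behavior, trained independently on a small feature space ----
def turn_left(f):  print("turn left", f)
def turn_right(f): print("turn right", f)
def step(f):       print("step forward", f)

approach = LearnedBehavior("approach-ball")
approach.demonstrate((-0.5,), turn_left)    # ball to the left  -> turn left
approach.demonstrate((0.5,), turn_right)    # ball to the right -> turn right
approach.demonstrate((0.0,), step)          # ball centered     -> walk forward

# --- Higher-level behavior: its "actions" are the trained sub-behaviors ----
play = LearnedBehavior("play-soccer")
play.demonstrate((1.0,), approach.act)                     # ball visible -> approach it
play.demonstrate((0.0,), lambda f: print("search for ball"))

play.act((1.0,))   # dispatches to approach-ball, which was trained separately
```

Because each behavior is trained only over its own small feature and action space, the number of demonstrations needed per behavior stays small, which is the sense in which hierarchical decomposition shrinks the learning space.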

Files

Original bundle
Name: Sullivan_gmu_0883E_10792.pdf
Size: 6.5 MB
Format: Adobe Portable Document Format