Up:[[Tutorial]]

#contents

* Objective [#jac7a1ad]
Using voice conversation, an agent in SIGVerse can learn a complex action based on a limited set of simple actions that were defined beforehand.

* Demonstration [#wc84edf2]
- basic implementation: http://web.iir.nii.ac.jp/sigverse/web/pov104clean/
- admin page: http://web.iir.nii.ac.jp/sigverse/web/pov104clean/admin
~ username: admin
~ password: khoand123
* System overview [#c5f13343]
The system consists of 3 parts:~
- Speech recognition module: converts the speech signal into text
- Controller (agent): communicates with the human outside of SIGVerse via the [[send Message API:http://www.sociointelligenesis.org/SIGVerse/index.php?Speech_Recognition_using_Microsoft_SAPI#b49c4788]] and performs actions inside SIGVerse
- Dialogue engine (AIML chatbot system): performs the learning procedure and stores the knowledge in a database~
&ref(sys_overview.png);~
Currently, the human must actively ask the agent for something by voice through the ASR module; the converted text is then redirected to the dialogue engine (DE) module. The DE module searches the AIML database for the best match to produce a response and sends it back to the controller. The response can contain action information in a special format. The controller extracts this action information, performs the actions, and returns the answer text to the human.~
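The exact "special format" for embedding actions in a DE response is not specified here. As a rough sketch only, assuming a hypothetical bracket notation like &#x60;[action:walk(forward,2)]&#x60; mixed into the answer text, the controller-side extraction step could look like:

```python
import re

# Hypothetical format (not from the original system): [action:name(arg1,arg2,...)]
ACTION_PATTERN = re.compile(r"\[action:(\w+)\(([^)]*)\)\]")

def extract_actions(response):
    """Split a dialogue-engine response into (actions, answer_text).

    Each action is a (name, args) tuple; the remaining text is the
    answer the agent returns to the human.
    """
    actions = [(name, [a.strip() for a in args.split(",")] if args else [])
               for name, args in ACTION_PATTERN.findall(response)]
    answer_text = ACTION_PATTERN.sub("", response).strip()
    return actions, answer_text

# Example: one embedded action plus spoken answer text
acts, text = extract_actions("[action:walk(forward,2)] OK, I am walking.")
```

The action names, argument layout, and bracket syntax above are all assumptions for illustration; the real controller would use whatever format the AIML templates actually emit.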


* Speech recognition module [#d121a620]

* Action learning module [#z1aa8f48]
