Until a fateful combination of machine learning, quantum computing, and 3D printing spawns tyrannical artificial lifeforms to rule over all mankind, we need to settle for semi-intelligent devices that we program by hand. An important piece of that puzzle is language processing. In this recipe, we will use the OpenEars library to have our iOS device speak and recognize some basic English dialogue.
Due to the size of the libraries required to use OpenEars, this recipe has its own project. Please refer to the Ch6_SpeechRecognition project for the full working code of this recipe.
The OpenEars installation process is complex. Among other things, it requires the configuration of four other libraries: Flite, Pocketsphinx, Sphinxbase, and CMUCLMTK. The OpenEars library is itself an embedded Xcode project that is statically linked to your project.
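The exact steps depend on the OpenEars version you download, but the embedded-project setup generally follows the pattern below. Treat this as an illustrative sketch rather than the recipe's own instructions; the header search path shown is an assumed example, not a path from the Ch6_SpeechRecognition project.

```text
1. Drag OpenEars.xcodeproj from the OpenEars distribution into your
   app project's navigator (as an embedded project).
2. In your app target's Build Phases > Target Dependencies, add the
   OpenEars static library target.
3. In Build Phases > Link Binary With Libraries, add libOpenEars.a,
   along with AVFoundation.framework and AudioToolbox.framework,
   which OpenEars needs for audio playback and capture.
4. In Build Settings > Header Search Paths, add the directory that
   contains the OpenEars headers, for example
   "$(SRCROOT)/../OpenEars" (illustrative path only).
```

Once the embedded project builds and links cleanly, the OpenEars classes become available to your application code via their headers.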
It is recommended that you take a look at the Ch6_SpeechRecognition project first. From there...