How can I implement a console application that speaks and listens?
Last reviewed: 3/19/2008
HOW Article ID: H030801
The information in this article applies to:
- SpeechKit 5, 6
Summary
Console applications do not have windows and therefore cannot use SAPI. To voice-enable console or service applications, use the native speech APIs from vendors such as Cepstral and Nuance.
Within SpeechKit 5, you can select the native engine APIs for the Nuance VoCon 3200 speech recognizer and for the Cepstral and Nuance RealSpeak Solo speech synthesizers; none of these require SAPI or an application window to run.
More Information
Two new samples have been added to the SpeechKit 5 CDLL directory: ConsoleSR and ConsoleTTS. These samples illustrate using speech recognition and speech synthesis from within a console application.
The new ConsoleSR sample illustrates using Nuance VoCon 3200 to recognize commands with a command vocabulary. A command vocabulary and command resource were used to illustrate the dynamic definition capabilities available with the SpeechKit vocabulary management feature; grammar vocabularies can be used as well.
The VoCon 3200 SDK must be installed before running the sample program. Before compiling and running the sample, update the CSPEnginePath and CSPAcousticModel properties to values that match the installed VoCon SDK.
The new ConsoleTTS sample illustrates using Cepstral Swift and Nuance RealSpeak Solo voices via their native APIs. Although the sample does not take advantage of event callbacks, they are supported in the same manner as in the ConsoleSR sample.
Either a Cepstral or a Nuance RealSpeak Solo voice must be installed before running the sample program. Before compiling and running the sample, update the CSPEnginePath property to a value that matches the installed voice.