Do More with Microsoft Speech and Natural User Interface Technology

Chant software and services enable you to do more with Microsoft speech and natural user interface technology. Start talking with your applications using Microsoft recognizers and synthesizers, and start tracking movement with Kinect sensors, in a matter of minutes. It's that easy.

Or, if you prefer, have Chant provide the code to drop into your applications so you can start interacting with them with little or no effort on your part.

Chant application-ready class libraries simplify using the Azure Speech, Microsoft SAPI5, Microsoft Speech Platform, .NET Speech, WindowsMedia Speech (UWP and WinRT), and Natural User Interface API (NAPI) SDKs.

In addition to supporting the features provided by these SDKs, your applications can also take advantage of Chant features not implemented in them, such as:

  • Use a common object framework to manage resources across different SDKs and runtimes.
  • Manage context-based and context-free recognition by dynamically adding, removing, enabling, and disabling command, grammar, and dictation vocabularies.
  • Select and adjust recognizer, synthesizer, and audio options and property settings dynamically.
  • Track movement with simple start and stop controls.
  • Integrate with almost any application type, architecture, programming language, and development environment used for Windows platforms.

Let us know how we can help you do more with Microsoft speech and Natural User Interface technology.

Contact Chant

Create Grammars for High-performance Recognition with GrammarKit

Design grammars for high-performance speech recognition with GrammarKit

Now you can easily compile SAPI5 and W3C XML grammars and persist the compiled grammar binaries within your applications using GrammarKit from Chant. Your applications can:

  • Compile grammar source from file and string formats.
  • Persist compiled grammar binaries to files, as sketched below.
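
As a rough sketch of the compile-and-persist idea using only Microsoft's .NET System.Speech SRGS compiler (not the GrammarKit classes), where "commands.grxml" and "commands.cfg" are placeholder file names:

    // Requires a reference to the System.Speech assembly.
    using System.IO;
    using System.Speech.Recognition.SrgsGrammar;

    class CompileGrammar
    {
        static void Main()
        {
            // Load W3C SRGS grammar source from a file...
            var grammar = new SrgsDocument("commands.grxml");

            // ...and persist the compiled grammar binary to a file.
            using (var output = new FileStream("commands.cfg", FileMode.Create))
            {
                SrgsGrammarCompiler.Compile(grammar, output);
            }
        }
    }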

With GrammarKit in the Chant Developer Workbench, you can:

  • Create and edit grammars in native grammar syntax.
  • Compile and debug grammars.
  • Test grammars with live and recorded audio (requires SpeechKit).

Learn more about GrammarKit »

Build Humanlike Interfaces for Natural User Experience with KinesicsKit

Build humanlike interfaces for natural user experience with KinesicsKit

Now you can easily build humanlike interfaces for natural user experience in your applications using KinesicsKit from Chant. Your Natural User Interface API (NAPI) applications can:

  • Capture and map color, depth, and skeleton data.
  • Record and play back audio files.
  • Integrate movement tracking with speech technology.
  • Enumerate and control Microsoft Kinect sensors (see the sketch after this list).
  • Implement in your favorite programming language: C++, C++Builder, Delphi, Java, and .NET Framework (C# and VB).
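
As a rough sketch of what sensor enumeration and simple start/stop movement tracking look like at the SDK level, using only the Microsoft Kinect for Windows SDK 1.x types rather than Chant's NAPI classes:

    using System;
    using Microsoft.Kinect;   // Kinect for Windows SDK 1.x

    class TrackMovement
    {
        static void Main()
        {
            // Enumerate the connected Kinect sensors and take the first one.
            KinectSensor sensor = null;
            foreach (KinectSensor candidate in KinectSensor.KinectSensors)
                if (candidate.Status == KinectStatus.Connected) { sensor = candidate; break; }
            if (sensor == null) return;

            // Enable skeleton (movement) data and report the tracked head position.
            sensor.SkeletonStream.Enable();
            sensor.SkeletonFrameReady += (s, e) =>
            {
                using (SkeletonFrame frame = e.OpenSkeletonFrame())
                {
                    if (frame == null) return;
                    var skeletons = new Skeleton[frame.SkeletonArrayLength];
                    frame.CopySkeletonDataTo(skeletons);
                    foreach (Skeleton body in skeletons)
                        if (body.TrackingState == SkeletonTrackingState.Tracked)
                            Console.WriteLine("Head at {0:F2}, {1:F2}",
                                body.Joints[JointType.Head].Position.X,
                                body.Joints[JointType.Head].Position.Y);
                }
            };

            sensor.Start();        // simple start...
            Console.ReadLine();
            sensor.Stop();         // ...and stop controls
        }
    }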

With KinesicsKit in the Chant Developer Workbench, you can:

  • Enumerate Microsoft Kinect sensors.
  • Render color, depth, and skeleton data.
  • Trace sensor events.
  • Test application sensor management functions.

Learn more about KinesicsKit »

Tailor Pronunciations for Maximum Clarity with LexiconKit

Tailor pronunciations for maximum clarity with LexiconKit

Now you can easily generate and tailor word pronunciations directly from within your applications using LexiconKit from Chant. Your applications can:

  • Create, edit, and speak word pronunciations on demand.

With LexiconKit in the Chant Developer Workbench, you can:

  • Create and edit W3C lexicons (.pls), such as the example below.
  • Generate word pronunciation phonemes.
  • Edit word pronunciation phonemes.
  • Speak word pronunciation phonemes.
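
For reference, a W3C pronunciation lexicon (.pls) of the kind LexiconKit creates and edits is a small XML document; the word and IPA phonemes below are just an example:

    <?xml version="1.0" encoding="UTF-8"?>
    <lexicon version="1.0"
             xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
             alphabet="ipa" xml:lang="en-US">
      <!-- One lexeme maps a written form (grapheme) to its pronunciation (phoneme). -->
      <lexeme>
        <grapheme>Chant</grapheme>
        <phoneme>tʃænt</phoneme>
      </lexeme>
    </lexicon>

Synthesizers that support W3C pronunciation lexicons can reference a document like this so the tailored pronunciation is used at speak time.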

Administer Speaker Profiles for Accurate Recognition with ProfileKit

Administer speaker profiles for accurate speech recognition with ProfileKit

Now you can easily administer speaker profiles within your applications using ProfileKit from Chant. Your applications can:

  • Create and delete speaker profiles on demand.
  • Integrate speaker training into your application features to ensure maximum recognition accuracy and reliability (a brief sketch follows).
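
Under the hood these are SAPI5 recognition profiles. A rough sketch that enumerates them and selects the active profile using the SAPI5 automation interfaces (a COM reference to the Microsoft Speech Object Library, not the ProfileKit classes) might look like this:

    using System;
    using SpeechLib;   // COM reference: Microsoft Speech Object Library (SAPI 5.4)

    class SpeakerProfiles
    {
        static void Main()
        {
            // The shared recognizer exposes the speaker profiles registered with SAPI5.
            var context = new SpSharedRecoContext();
            ISpeechRecognizer recognizer = context.Recognizer;

            // Enumerate the registered speaker profiles.
            ISpeechObjectTokens profiles = recognizer.GetProfiles("", "");
            foreach (ISpeechObjectToken profile in profiles)
                Console.WriteLine(profile.GetDescription(0));

            // Make the first profile the active one; SAPI5 also exposes its own
            // speaker-training wizard through the recognizer's "UserTraining" UI.
            if (profiles.Count > 0)
                recognizer.Profile = profiles.Item(0);
        }
    }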

With ProfileKit in the Chant Developer Workbench, you can:

  • Create and delete speaker profiles.
  • Enumerate speaker profiles for selection and command line testing.
  • Invoke speaker training.

Learn more about ProfileKit »

Integrate Recognition and Synthesis Faster with SpeechKit

Integrate speech recognition and synthesis faster with SpeechKit

You can easily manage recognizers within your applications using SpeechKit from Chant. Your applications can:

  • Capture spoken input as if it were typed on a keyboard.
  • Select menus and list items, click buttons, and follow hypertext links by speaking instead of using a mouse.
  • Simulate keyboard input and mouse clicks.
  • Recognize spoken languages supported by recognizers.
  • Access detailed recognition result attributes and properties.
  • Manage context-based and context-free recognition by dynamically adding, removing, enabling, and disabling command, grammar, and dictation vocabularies.
  • Take advantage of built-in audio management and recognize from the audio format needed by your application.
  • Recognize from files and microphone (live) audio sources, as sketched below.
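
For a sense of what this looks like at the Microsoft SDK level, here is a minimal dictation-plus-commands recognizer built directly on .NET System.Speech rather than the SpeechKit classes; the WAV file name is a placeholder:

    using System;
    using System.Speech.Recognition;   // reference the System.Speech assembly

    class RecognizeFromMicrophone
    {
        static void Main()
        {
            using (var recognizer = new SpeechRecognitionEngine())
            {
                // Context-free dictation plus a small context-based command vocabulary;
                // grammars can be loaded, unloaded, enabled, and disabled at runtime.
                recognizer.LoadGrammar(new DictationGrammar());
                var commands = new Grammar(new GrammarBuilder(new Choices("open", "close", "save")));
                recognizer.LoadGrammar(commands);

                recognizer.SpeechRecognized += (s, e) =>
                    Console.WriteLine("Heard: {0} (confidence {1:F2})", e.Result.Text, e.Result.Confidence);

                recognizer.SetInputToDefaultAudioDevice();   // or SetInputToWaveFile("input.wav") for file audio
                recognizer.RecognizeAsync(RecognizeMode.Multiple);

                Console.ReadLine();                          // listen until Enter is pressed
                recognizer.RecognizeAsyncStop();
            }
        }
    }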

You can easily manage synthesizers within your applications using SpeechKit from Chant. Your applications can:

  • Synthesize speech from anywhere within your application.
  • Easily enumerate and select a voice.
  • Easily adjust the spoken output speed and volume.
  • Take advantage of built-in audio management and synthesize to the audio format needed by your application.
  • Synthesize text from strings and files.
  • Access detailed synthesis result attributes and properties.
  • Process requests synchronously or asynchronously, as sketched below.
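
Likewise at the Microsoft SDK level, voice enumeration, rate and volume adjustment, and synchronous versus asynchronous requests with the .NET System.Speech synthesizer (not the SpeechKit classes) look roughly like this:

    using System;
    using System.Speech.Synthesis;   // reference the System.Speech assembly

    class SpeakText
    {
        static void Main()
        {
            using (var synthesizer = new SpeechSynthesizer())
            {
                // Enumerate the installed voices; SelectVoice(name) picks one by name.
                foreach (InstalledVoice voice in synthesizer.GetInstalledVoices())
                    Console.WriteLine(voice.VoiceInfo.Name);

                synthesizer.Rate = 1;      // -10 (slowest) to 10 (fastest)
                synthesizer.Volume = 80;   // 0 to 100

                // Render to the default audio device, or to a WAV file with SetOutputToWaveFile.
                synthesizer.SetOutputToDefaultAudioDevice();
                synthesizer.Speak("Hello from the synthesizer.");            // synchronous: blocks until spoken
                synthesizer.SpeakAsync("And this request is asynchronous."); // queued: returns immediately
                Console.ReadLine();                                          // keep the process alive while it plays
            }
        }
    }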

With SpeechKit in the Chant Developer Workbench, you can:

  • Enumerate recognizers for selection and testing of recognizer features.
  • Support grammar activation and testing (requires GrammarKit).
  • Enumerate synthesizers for selection and testing of synthesizer features.
  • Trace recognition and synthesis events.
  • Support SSML playback (requires VoiceMarkupKit).

Learn more about SpeechKit »

Fine-tune Speech Synthesis with VoiceMarkupKit

Fine-tune speech synthesis with VoiceMarkupKit

Now you can easily mark up text for text-to-speech within your applications using VoiceMarkupKit from Chant. Your applications can:

  • Generate markup language in Azure Speech SSML, SAPI5 XML, and W3C SSML syntax.
  • Generate pronunciation phonemes for Azure Speech, SAPI5-compatible, and Microsoft Speech Platform voices.
  • Dynamically switch among speech APIs and syntax formats (a W3C SSML example follows this list).
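
For reference, a small W3C SSML document of the kind VoiceMarkupKit generates; the voice name is only an example, and a comparable structure applies to Azure Speech SSML:

    <?xml version="1.0" encoding="UTF-8"?>
    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
      <!-- The voice name is an example; substitute any voice available to your synthesizer. -->
      <voice name="en-US-JennyNeural">
        Your confirmation code is
        <say-as interpret-as="characters">A1B2</say-as>.
        <break time="300ms"/>
        <prosody rate="slow" volume="loud">Thank you for calling.</prosody>
        This word is pronounced
        <phoneme alphabet="ipa" ph="tʃænt">Chant</phoneme>.
      </voice>
    </speak>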

With VoiceMarkupKit in the Chant Developer Workbench, you can:

  • Create and edit documents with Azure Speech SSML, SAPI5 XML, and W3C SSML.
  • Generate Azure Speech SSML, SAPI5 XML, and W3C SSML.
  • Generate word pronunciation phonemes.
  • Edit word pronunciation phonemes (requires LexiconKit).
  • Play back text with Azure Speech SSML, SAPI5 XML, and W3C SSML (requires SpeechKit).

Learn more about VoiceMarkupKit »

Win Over Your Audience with Enriched Communications using VoiceXMLKit

Win over your audience with enriched communications using VoiceXMLKit

Now you can develop and test VoiceXML applications offline before deploying to servers with VoiceXMLKit from Chant. Your applications can:

  • Create, edit, validate, and test VoiceXML.
  • Dynamically generate VoiceXML (a minimal document is shown after this list).
  • Develop VoiceXML applications with your favorite programming language: C++Builder, C++, Delphi, Java, and .NET Framework (C# and VB).
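
For reference, a minimal VoiceXML document of the kind you would create and test offline with VoiceXMLKit; the grammar file name is a placeholder:

    <?xml version="1.0" encoding="UTF-8"?>
    <vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
      <form id="survey">
        <field name="color">
          <prompt>What is your favorite color?</prompt>
          <!-- "colors.grxml" is a placeholder W3C SRGS grammar file. -->
          <grammar type="application/srgs+xml" src="colors.grxml"/>
          <filled>
            <prompt>You said <value expr="color"/>.</prompt>
          </filled>
        </field>
      </form>
    </vxml>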

With VoiceXMLKit in the Chant Developer Workbench, you can:

  • Create and edit VoiceXML documents.
  • Test VoiceXML documents with microphone audio and keypad data.
  • Trace runtime events.

Learn more about VoiceXMLKit »
