Last reviewed: 12/15/2024 8:39:02 AM

Maui Applications

Develop cross-platform Maui applications in C# that speak and listen and that target the Android, iOS, and Windows platforms.

The following sections describe the steps for integrating SpeechKit with Maui applications.

SpeechKit Assemblies

SpeechKit includes Maui .NET assemblies for .NET 8 and 9.

To access SpeechKit classes within a Maui application, add a project reference to the applicable NuGet package:

  • Select the application project in the Solution Explorer.
  • Right-click and select the Manage NuGet Packages… menu item.
  • Enter Chant in the search bar.
  • Select the Chant.SpeechKit.Maui package and press the Install button.
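Alternatively, the package can be referenced directly in the project file. The fragment below is a sketch; the version number is illustrative and should match the version you actually install:

```xml
<!-- .csproj fragment: illustrative version number -->
<ItemGroup>
  <PackageReference Include="Chant.SpeechKit.Maui" Version="14.0.0" />
</ItemGroup>
```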

To access the SpeechKit classes within Maui applications, add using directives for the Chant shared and SpeechKit namespaces:


using Chant.SpeechKit.Maui;
using Chant.Shared.Maui;

Object Instantiation

Declare a global variable for the SpeechKit class, instantiate an instance, add the event handler(s), and set the credentials.


private NSpeechKit? _SpeechKit = null;
// Cross platform default recognizer
private NChantRecognizer? _Recognizer = null;
// Cross platform default synthesizer
private NChantSynthesizer? _Synthesizer = null;

public MainPage()
{
    InitializeComponent();
    // Instantiate SpeechKit
    _SpeechKit = new NSpeechKit();
    if (_SpeechKit != null)
    {
        // Set credentials
        _SpeechKit.SetCredentials("Credentials");
        // Create cross platform default recognizer
        _Recognizer = _SpeechKit.CreateChantRecognizer();
        if (_Recognizer != null)
        {
            _Recognizer.InitComplete += Recognizer_InitComplete;
            _Recognizer.RecognitionCommand += Recognizer_RecognitionCommand;
        }
        // Create cross platform default synthesizer
        _Synthesizer = _SpeechKit.CreateChantSynthesizer();
        if (_Synthesizer != null)
        {
            _Synthesizer.InitComplete += Synthesizer_InitComplete;
            _Synthesizer.RangeStart += Synthesizer_RangeStart;
            _Synthesizer.WordPosition += Synthesizer_WordPosition;
        }
    }
}

Event Callbacks

Event callbacks are the mechanism by which the class object sends information back to the application, such as that speech recognition occurred, audio playback finished, or an error occurred.


private void Recognizer_InitComplete(object? sender, SREventArgs args)
{
    // Android only
    // Safe to enumerate recognizers and start recognition
    ...
}
private void Recognizer_RecognitionCommand(object? sender, RecognitionCommandEventArgs args)
{
    if ((args != null) && (args.Phrase != null))
    {
        int commandProp = 0;
        int listProp = 0;
        // A type pattern tests the runtime type and downcasts in one step
        if (args is AndroidRecognitionCommandEventArgs androidMediaArgs && androidMediaArgs.Semantics != null)
        {
            // Android-specific properties
            ...
        }
        if (args is SFRecognitionCommandEventArgs sfArgs && sfArgs.Semantics != null)
        {
            // iOS-specific properties
            ...
        }
        if (args is WindowsMediaRecognitionCommandEventArgs windowsMediaArgs && windowsMediaArgs.Semantics != null)
        {
            // Windows-specific properties
            ...
        }
        ...
    }
}
private void Synthesizer_InitComplete(object? sender, TTSEventArgs args)
{
    // Android only
    // Safe to enumerate voices and speak
}
private void Synthesizer_RangeStart(object? sender, RangeStartEventArgs args)
{
    int start = 0;
    int length = 0;
    if (args != null)
    {
        // A type pattern is safer than a direct cast followed by a null
        // check: a direct cast throws on a type mismatch rather than
        // yielding null.
        if (args is AndroidRangeStartEventArgs androidargs)
        {
            // Android-specific properties
            ...
        }
        if (args is AVFRangeStartEventArgs avfargs)
        {
            // iOS-specific properties
            ...
        }
        ...
    }
}
private void Synthesizer_WordPosition(object? sender, WordPositionEventArgs args)
{
    // Windows only event does not need downcast
    if (args != null)
    {
        int startPosition = args.Position;
        int wordLength = args.Length;
        ...
    }
}
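The downcast idiom used in these handlers is ordinary C# type pattern matching. The following self-contained sketch uses hypothetical stand-in event-args types (not the SpeechKit classes) to show how the pattern tests and casts in one step:

```csharp
using System;

// Local function dispatching on the runtime type of the event args
string Describe(RangeArgs args)
{
    // The type pattern binds the casted variable only when the runtime
    // type matches; no separate null check is needed afterward.
    if (args is AndroidRangeArgs androidArgs)
        return $"android utterance={androidArgs.UtteranceId} pos={androidArgs.Position}";
    return $"generic pos={args.Position} len={args.Length}";
}

Console.WriteLine(Describe(new AndroidRangeArgs { UtteranceId = "u1", Position = 4, Length = 2 }));
Console.WriteLine(Describe(new RangeArgs { Position = 7, Length = 3 }));

// Hypothetical stand-ins for a base event args and a platform-specific subclass
class RangeArgs
{
    public int Position;
    public int Length;
}

class AndroidRangeArgs : RangeArgs
{
    public string UtteranceId = "";
}
```

Because the pattern match fails cleanly for non-matching types, the same handler can service all three platforms without platform checks.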

Android Permissions

Speech recognition on Android requires the user to grant the RECORD_AUDIO permission. Add the permission to the manifest file and request it at runtime in the application.

<uses-permission android:name="android.permission.RECORD_AUDIO" />

Speech synthesis streamed to a file requires the WRITE_EXTERNAL_STORAGE permission. Add the permission to the manifest file and request it at runtime in the application.

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
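The manifest entries above only declare the permissions; the runtime request itself can be made with the .NET MAUI Permissions API. A minimal sketch (Permissions.Microphone corresponds to RECORD_AUDIO on Android; error handling elided, and this must run in a MAUI app context):

```csharp
using Microsoft.Maui.ApplicationModel;

// Sketch: check and, if needed, request microphone access
// before starting speech recognition.
async Task<bool> EnsureMicrophonePermissionAsync()
{
    PermissionStatus status = await Permissions.CheckStatusAsync<Permissions.Microphone>();
    if (status != PermissionStatus.Granted)
    {
        // Prompts the user on first request; returns the resulting status
        status = await Permissions.RequestAsync<Permissions.Microphone>();
    }
    return status == PermissionStatus.Granted;
}
```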

Apps Targeting Android 11

Speech recognition apps targeting Android 11 with API 30 require an additional manifest entry. Add the following queries section before the application element.

<queries>
    <intent>
        <action android:name="android.speech.RecognitionService" />
    </intent>
</queries>

Speech synthesis apps targeting Android 11 with API 30 require an additional manifest entry. Add the following queries section before the application element.

<queries>
    <intent>
        <action android:name="android.intent.action.TTS_SERVICE" />
    </intent>
</queries>

iOS Permissions

Speech recognition on iOS requires the user to grant speech recognition permission and access to the microphone. The app's Info.plist must contain an NSSpeechRecognitionUsageDescription key and an NSMicrophoneUsageDescription key, each with a string value.

Select the Info.plist file in the project and add two keys with usage description text:

    <key>NSMicrophoneUsageDescription</key>
    <string>mic for speech recognition</string>
    <key>NSSpeechRecognitionUsageDescription</key>
    <string>speech recognition</string>

Development and Deployment Checklist

When developing and deploying .NET applications, ensure you have a valid license, bundle the correct Chant class libraries, and configure your installation properly on the target system.

Review the following checklist before deploying Maui applications:

  • Develop and deploy .NET applications to any system with a valid license from Chant. See the section License for more information about licensing Chant software.
  • Merge Chant.SpeechKit.Maui.dll, Chant.Shared.Maui.dll, Chant.SpeechKit.Androidnetx.Binding.dll, and/or Chant.SpeechKit.iOSnetx.Binding.dll assemblies with your application if using an obfuscator like .NET Reactor by Eziriz.

Sample Projects

Maui sample projects are installed at the following location:

  • Documents\Chant\SpeechKit 14\Maui.