Last reviewed: 12/15/2024 8:39:58 AM

Swift Applications

Develop iOS and Mac applications that speak and listen using Swift with Xcode.

The following sections describe the steps for integrating SpeechKit with Swift applications.

SpeechKit Swift Classes

SpeechKit provides Swift classes for developing applications that speak and listen, packaged as an XCFramework. The XCFramework supports both simulator testing and device deployment.

To install the SpeechKit.xcframework, copy the SpeechKit.xcframework folder to your Mac development platform.

To access the SpeechKit classes within your application, drag the SpeechKit.xcframework into your application project, or add it under the target's General settings.

Add the Speech.framework for speech recognition and/or the AVFoundation.framework for speech synthesis.

Import the SpeechKit classes into your application:


import SpeechKit

Object Instantiation

Instantiate SpeechKit and set the credentials. For speech recognition, instantiate a recognizer object, set its delegate, and define recognition event handlers. For speech synthesis, instantiate a synthesizer object and, optionally, set its delegate and define synthesis callback event handlers.


_SpeechKit = SPSpeechKit()
if (_SpeechKit != nil)
{
    // Set credentials
    _ = _SpeechKit!.setCredentials(credentials: "Credentials")
    _Recognizer = _SpeechKit!.createChantRecognizer()
    if (_Recognizer != nil)
    {
        _Recognizer!.delegate = self
    }
    _Synthesizer = _SpeechKit!.createChantSynthesizer()
    if (_Synthesizer != nil)
    {
        _Synthesizer!.delegate = self
    }
}

Event Callbacks

Event callbacks are the mechanism by which the class object sends information back to the application, such as notification that speech recognition occurred, audio playback finished, or an error occurred.

For speech recognition, add the delegate protocol SPChantRecognizerDelegate to your class declaration:


class ViewController: UIViewController, UIPickerViewDataSource, UIPickerViewDelegate, SPChantRecognizerDelegate {

Add the delegate protocol methods to your class. Even if you do not handle the event, the protocol method is required in Swift.


func apiError(sender: SPChantRecognizer, args: SPChantAPIErrorEventArgs)
{
        
}
...
func recognitionDictation(sender: SPChantRecognizer, args: SPRecognitionDictationEventArgs)
{
    let newText = String(format: "%@%@", self.textView1.text, args.text)
    self.textView1.text = newText
}
...

For speech synthesis, add the delegate protocol SPChantSynthesizerDelegate to your class declaration:


class ViewController: UIViewController, UIPickerViewDataSource, UIPickerViewDelegate, SPChantSynthesizerDelegate {

Add the delegate protocol methods to your class. Even if you do not handle the event, the protocol method is required in Swift.

func apiError(sender: SPChantSynthesizer, args: SPChantAPIErrorEventArgs)
{
        
}
...
func audioDestStop(sender: SPChantSynthesizer, args: SPAudioEventArgs)
{
    self.button1.isEnabled = true
}
...
func rangeStart(sender: SPChantSynthesizer, args: SPRangeStartEventArgs)
{
    // Cast to the specific Speech API object to access API-specific properties
    let avfargs: SPAVFRangeStartEventArgs = args as! SPAVFRangeStartEventArgs
    self.textView1.selectedRange = (NSRange(location: avfargs.location, length: avfargs.length))
}
...

Permissions

Speech recognition requires the user to grant speech recognition permission and access to the microphone. The app's Info.plist must contain an NSSpeechRecognitionUsageDescription key with a string value and an NSMicrophoneUsageDescription key with a string value.
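Both permissions are granted by the user at runtime through system prompts. SpeechKit presumably triggers these prompts when recognition starts; if you prefer to request authorization up front, a minimal sketch using Apple's own Speech and AVFoundation APIs (not SpeechKit classes; the requestSpeechPermissions name is illustrative, and requestRecordPermission is iOS-only) might look like:

```swift
import Speech        // SFSpeechRecognizer authorization
import AVFoundation  // AVAudioSession microphone permission (iOS)

func requestSpeechPermissions() {
    // Ask the user to allow speech recognition; the system shows the
    // NSSpeechRecognitionUsageDescription text from Info.plist.
    SFSpeechRecognizer.requestAuthorization { status in
        switch status {
        case .authorized:
            print("Speech recognition authorized")
        default:
            print("Speech recognition not authorized: \(status)")
        }
    }

    // Ask for microphone access; the system shows the
    // NSMicrophoneUsageDescription text from Info.plist.
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        print(granted ? "Microphone access granted" : "Microphone access denied")
    }
}
```

The system displays the usage description strings from Info.plist in these prompts; if either key is missing, the app terminates when it attempts the corresponding access.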

Select the Info.plist file in the project and add two keys with usage description text:

  • Add a key to the Information Property List by clicking the plus button.
  • Select: Privacy - Speech Recognition Usage Description.
  • Enter a string value description such as: speech recognition.
  • Add a key to the Information Property List by clicking the plus button.
  • Select: Privacy - Microphone Usage Description.
  • Enter a string value description such as: mic for speech recognition.
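In source form, the two Info.plist entries from the steps above look like this (the description strings are the example values; use wording appropriate for your app):

```xml
<key>NSSpeechRecognitionUsageDescription</key>
<string>speech recognition</string>
<key>NSMicrophoneUsageDescription</key>
<string>mic for speech recognition</string>
```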

macOS applications require an additional project setting to enable audio input:

  • Select the project's Signing & Capabilities tab, then select the Audio Input checkbox under App Sandbox > Hardware and under Hardened Runtime > Resource Access; or
  • add the com.apple.security.device.audio-input and com.apple.security.device.usb keys with Yes (true) values to the app's entitlements.
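The second option corresponds to entries like the following in the app's .entitlements property list (key names as given above):

```xml
<key>com.apple.security.device.audio-input</key>
<true/>
<key>com.apple.security.device.usb</key>
<true/>
```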

Development and Deployment Checklist

When developing and deploying Swift applications, ensure you have a valid license. Review the following checklist before developing and deploying your applications:

  • Develop and deploy Swift applications to any system with a valid license from Chant. See the section License for more information about licensing Chant software.
  • Build and link with the SpeechKit.xcframework, Speech.framework for speech recognition, and AVFoundation.framework for speech synthesis.

Sample Projects

Swift sample projects are installed at the following location:

  • Chant\SpeechKit 14\iOS\Swift and
  • Chant\SpeechKit 14\macOS\Swift.