This is the plugin demo in action.
_Demo screenshots: while recognizing Dutch 🇳🇱, and after recognizing American-English 🇺🇸._
From the command prompt, go to your app's root folder and execute:

```shell
tns plugin add nativescript-speech-recognition
```
Depending on the OS version, a speech engine may not be available, so check availability first.
JavaScript:

```javascript
// require the plugin
var SpeechRecognition = require("nativescript-speech-recognition").SpeechRecognition;

// instantiate the plugin
var speechRecognition = new SpeechRecognition();

speechRecognition.available().then(
  function(available) {
    console.log(available ? "YES!" : "NO");
  }
);
```
TypeScript:

```typescript
// import the plugin
import { SpeechRecognition } from "nativescript-speech-recognition";

// instantiate the plugin (assuming the code below is inside a class)
private speechRecognition = new SpeechRecognition();

public checkAvailability(): void {
  this.speechRecognition.available().then(
    (available: boolean) => console.log(available ? "YES!" : "NO"),
    (err: string) => console.log(err)
  );
}
```
On iOS this will trigger two prompts.

The first prompt asks the user to allow Apple to analyze the voice input. You can extend the consent screen with your own message by adding a fragment like this to `app/App_Resources/iOS/Info.plist`:
```xml
<key>NSSpeechRecognitionUsageDescription</key>
<string>My custom recognition usage description. Overriding the default empty one in the plugin.</string>
```
The second prompt requests access to the microphone:
```xml
<key>NSMicrophoneUsageDescription</key>
<string>My custom microphone usage description. Overriding the default empty one in the plugin.</string>
```
To start transcribing speech, call `startListening`. The `onResult` callback is invoked repeatedly with partial transcriptions while recognition is in progress:

```typescript
// import the transcription type
import { SpeechRecognitionTranscription } from "nativescript-speech-recognition";

this.speechRecognition.startListening(
  {
    // optional, uses the device locale by default
    locale: "en-US",
    // this callback will be invoked repeatedly during recognition
    onResult: (transcription: SpeechRecognitionTranscription) => {
      console.log(`User said: ${transcription.text}`);
      console.log(`User finished?: ${transcription.finished}`);
    }
  }
).then(
  (started: boolean) => { console.log("started listening"); },
  (errorMessage: string) => { console.log(`Error: ${errorMessage}`); }
);
```
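Because `onResult` fires repeatedly, each call may replace the previous partial text rather than append to it, so you typically keep only the latest transcription and commit it once `finished` is `true`. A minimal sketch of that pattern (the `Transcription` interface below only mirrors the `text`/`finished` shape shown above, and `TranscriptBuffer` is a hypothetical helper, not part of the plugin):

```typescript
// Assumed shape, mirroring the plugin's SpeechRecognitionTranscription
interface Transcription {
  text: string;
  finished: boolean;
}

// Hypothetical helper: keeps the latest partial result and
// commits it to a list of final utterances once it is finished.
class TranscriptBuffer {
  private partial = "";
  readonly utterances: string[] = [];

  onResult(t: Transcription): void {
    this.partial = t.text;
    if (t.finished) {
      this.utterances.push(this.partial);
      this.partial = "";
    }
  }

  get current(): string {
    return this.partial;
  }
}

// Simulated stream of recognition callbacks
const buffer = new TranscriptBuffer();
buffer.onResult({ text: "hello", finished: false });
buffer.onResult({ text: "hello world", finished: true });
console.log(buffer.utterances); // [ 'hello world' ]
```

You could pass `(t) => buffer.onResult(t)` as the `onResult` option and bind your UI to `buffer.current` for live feedback.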
To stop transcribing, call `stopListening`:

```typescript
this.speechRecognition.stopListening().then(
  () => { console.log("stopped listening"); },
  (errorMessage: string) => { console.log(`Stop error: ${errorMessage}`); }
);
```
Check out this tutorial (YouTube) to learn how to use this plugin in a NativeScript-Angular app.