eddyverbruggen / nativescript-speech-recognition
:speech_balloon: Speech to text, using the awesome engines readily available on the device.
License: Other
I get error 3 after calling this.speechRecognition.startListening() (on Android, error code 3 corresponds to SpeechRecognizer.ERROR_AUDIO).
Hard to debug...
Instead of always deferring the permission popups to the moment the recording kicks in, we can add a way to ask for permissions first.
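A minimal sketch of what that could look like, with a hypothetical `askPermission` function injected so the same flow works on both platforms (the real plugin API may end up different):

```typescript
// Hypothetical permission gate: ask for the microphone/speech permissions
// up front, and only fall through to start listening once they are granted.
type PermissionFn = () => Promise<boolean>;
type ListenFn = () => Promise<void>;

async function startWithPermission(
  askPermission: PermissionFn,
  startListening: ListenFn
): Promise<boolean> {
  const granted = await askPermission();
  if (!granted) {
    // Surface the denial to the caller instead of failing mid-recording.
    return false;
  }
  await startListening();
  return true;
}
```

On iOS, `askPermission` would wrap SFSpeechRecognizer.requestAuthorization plus AVAudioSession's record permission; on Android, the RECORD_AUDIO runtime permission.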
TIL NSLocale.currentLocale does not actually return the locale of the device.
Hi,
when I use code like this:
this.speechRecognition.startListening({
  locale: "en-US",
  onResult: (transcription: SpeechRecognitionTranscription) => {
    console.log(JSON.stringify(transcription));
    console.log(`User finished?: ${transcription.finished}`);
  },
});
the transcription doesn't have a text property when I print it:
JS: {"finished":true}
So after looking around, I found the problem in node_modules, in speech-recognition.android.js:
options.onResult({ text: transcripts[0],
I replaced it with
text: transcripts.get(0)
and it works just fine.
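The underlying cause: on Android the results come back as a java.util.ArrayList, not a JavaScript array, so transcripts[0] is undefined and the element has to be read with .get(0). A small sketch of the difference, using a plain object to stand in for the Java list:

```typescript
// Stand-in for the java.util.ArrayList the Android speech recognizer returns:
// it exposes get(i)/size(), but NOT numeric indexing like a JS array.
interface JavaList<T> { get(i: number): T; size(): number; }

function firstTranscript(transcripts: JavaList<string>): string | undefined {
  // transcripts[0] would be undefined here; .get(0) is the correct accessor.
  return transcripts.size() > 0 ? transcripts.get(0) : undefined;
}

// Mock list mimicking the native object's shape.
const mockList: JavaList<string> = {
  get: (i) => ["hello world"][i],
  size: () => 1,
};
```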
Otherwise your app won't be able to play back any audio.
The option "locale" (for selecting the speech language) is not implemented in the plugin.
I just added this in speech-recognition.android.js to make it work:
intent.putExtra(android.speech.RecognizerIntent.EXTRA_LANGUAGE, "en");
replaced by
intent.putExtra(android.speech.RecognizerIntent.EXTRA_LANGUAGE, options.locale);
Could you please fix it in the repo? Thanks
Making this change backward compatible, so no major version bump required.
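The gist of keeping that change backward compatible, sketched as plain logic (the default locale here is an assumption, not the plugin's documented default):

```typescript
// Compute the value that would be put on the Android RecognizerIntent via
// intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, <result>). Falling back
// to a default keeps callers that never set `locale` working unchanged.
interface ListenOptions { locale?: string; }

function languageExtra(options: ListenOptions, fallback = "en-US"): string {
  return options.locale ?? fallback;
}
```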
How to listen continuously and get the words as the person speaks?
CONSOLE LOG: TypeError: Cannot read property 'osVersion' of undefined
This happens when checking if speech recognition is available.
Environment: iOS, NativeScript 7.
"sendBackResults" does not distinguish whether it is being invoked by "onResults" or by "onPartialResults", and thus always flags "finished: true" within the returned "SpeechRecognitionTranscription" object. A parameter could be added to "sendBackResults", to indicate whether the results being returned are partial or final.
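The suggested change can be sketched like this (names mirror the issue; the exact callback shape is an assumption):

```typescript
interface Transcription { text: string; finished: boolean; }
type ResultCallback = (t: Transcription) => void;

// One shared result path for both Android callbacks; the `isFinal` flag
// records whether it was reached from onResults or onPartialResults.
function sendBackResults(text: string, isFinal: boolean, onResult: ResultCallback): void {
  onResult({ text, finished: isFinal });
}
```

onPartialResults would then call sendBackResults(text, false, ...) while onResults passes true.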
On iOS, transcription.finished is not set when the user pauses in the middle of a sentence.
Click on Mic and it captures the text => click on Mic to stop listening => transcription.finished is true and transcription.text works perfectly.
Issue with continuous listening:
Click on Mic and it's listening => when the user pauses, speech recognition should detect that the user has stopped and return transcription.finished = true.
But on iOS it does not recognize that the user has stopped speaking.
Is there any workaround for this issue?
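One possible workaround until iOS reports this itself: treat a gap between partial results as the end of speech. A deterministic sketch (timestamps are passed in explicitly so the logic is testable; in an app you would feed it Date.now() from the onResult callback and poll from a short interval timer):

```typescript
// Tracks the time of the last partial result; once `silenceMs` elapses
// with no new partials, the utterance is considered finished.
class SilenceDetector {
  private lastResultAt: number | null = null;
  constructor(private silenceMs: number) {}

  onPartialResult(nowMs: number): void {
    this.lastResultAt = nowMs;
  }

  // Poll this to decide when to stop listening and flag `finished`.
  isFinished(nowMs: number): boolean {
    return this.lastResultAt !== null && nowMs - this.lastResultAt >= this.silenceMs;
  }
}
```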
Hi,
I am very new to nativescript. I wanted to try the demo before using this plugin but encountered "cycle link found" error as below. For reference, I tested on two machines (mac and ubuntu 16.04) but got the same error. Also, I am using latest node 8.x.
`npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN demo No description
npm WARN demo No repository field.
npm WARN demo No license field.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
added 221 packages in 4.074s
Copying template files...
Installing tns-android
+ [email protected]
added 1 package in 4.75s
Project successfully created.
Executing before-prepare hook from /home/side/Desktop/Workspace/nativescript-speech-recognition/demo/hooks/before-prepare/nativescript-dev-typescript.js
Found peer TypeScript 2.3.4
5 private recognitionRequest: SFSpeechAudioBufferRecognitionRequest = null;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(5,31): error TS2304: Cannot find name 'SFSpeechAudioBufferRecognitionRequest'.
6 private audioEngine: AVAudioEngine = null;
~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(6,24): error TS2304: Cannot find name 'AVAudioEngine'.
7 private speechRecognizer: SFSpeechRecognizer = null;
~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(7,29): error TS2304: Cannot find name 'SFSpeechRecognizer'.
8 private recognitionTask: SFSpeechRecognitionTask = null;
~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(8,28): error TS2304: Cannot find name 'SFSpeechRecognitionTask'.
9 private inputNode: AVAudioInputNode = null;
~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(9,22): error TS2304: Cannot find name 'AVAudioInputNode'.
10 private audioSession: AVAudioSession = null;
~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(10,25): error TS2304: Cannot find name 'AVAudioSession'.
13 this.audioEngine = AVAudioEngine.new();
~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(13,24): error TS2304: Cannot find name 'AVAudioEngine'.
18 resolve(SFSpeechRecognizer.new().available);
~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(18,15): error TS2304: Cannot find name 'SFSpeechRecognizer'.
24 SFSpeechRecognizer.requestAuthorization((status: SFSpeechRecognizerAuthorizationStatus) => {
~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(24,7): error TS2304: Cannot find name 'SFSpeechRecognizer'.
24 SFSpeechRecognizer.requestAuthorization((status: SFSpeechRecognizerAuthorizationStatus) => {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(24,56): error TS2304: Cannot find name 'SFSpeechRecognizerAuthorizationStatus'.
25 if (status !== SFSpeechRecognizerAuthorizationStatus.Authorized) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(25,24): error TS2304: Cannot find name 'SFSpeechRecognizerAuthorizationStatus'.
29 AVAudioSession.sharedInstance().requestRecordPermission((granted: boolean) => {
~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(29,9): error TS2304: Cannot find name 'AVAudioSession'.
39 let locale = NSLocale.alloc().initWithLocaleIdentifier(options.locale);
~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(39,22): error TS2304: Cannot find name 'NSLocale'.
40 this.speechRecognizer = SFSpeechRecognizer.alloc().initWithLocale(locale);
~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(40,33): error TS2304: Cannot find name 'SFSpeechRecognizer'.
42 this.speechRecognizer = SFSpeechRecognizer.new();
~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(42,33): error TS2304: Cannot find name 'SFSpeechRecognizer'.
55 SFSpeechRecognizer.requestAuthorization((status: SFSpeechRecognizerAuthorizationStatus) => {
~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(55,7): error TS2304: Cannot find name 'SFSpeechRecognizer'.
55 SFSpeechRecognizer.requestAuthorization((status: SFSpeechRecognizerAuthorizationStatus) => {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(55,56): error TS2304: Cannot find name 'SFSpeechRecognizerAuthorizationStatus'.
56 if (status !== SFSpeechRecognizerAuthorizationStatus.Authorized) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(56,24): error TS2304: Cannot find name 'SFSpeechRecognizerAuthorizationStatus'.
61 this.audioSession = AVAudioSession.sharedInstance();
~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(61,29): error TS2304: Cannot find name 'AVAudioSession'.
62 this.audioSession.setCategoryError(AVAudioSessionCategoryRecord);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(62,44): error TS2304: Cannot find name 'AVAudioSessionCategoryRecord'.
63 this.audioSession.setModeError(AVAudioSessionModeMeasurement);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(63,40): error TS2304: Cannot find name 'AVAudioSessionModeMeasurement'.
64 this.audioSession.setActiveWithOptionsError(true, AVAudioSessionSetActiveOptions.NotifyOthersOnDeactivation);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(64,59): error TS2304: Cannot find name 'AVAudioSessionSetActiveOptions'.
66 this.recognitionRequest = SFSpeechAudioBufferRecognitionRequest.new();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(66,35): error TS2304: Cannot find name 'SFSpeechAudioBufferRecognitionRequest'.
82 (result: SFSpeechRecognitionResult, error: NSError) => {
~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(82,22): error TS2304: Cannot find name 'SFSpeechRecognitionResult'.
82 (result: SFSpeechRecognitionResult, error: NSError) => {
~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(82,56): error TS2304: Cannot find name 'NSError'.
93 this.audioSession.setCategoryError(AVAudioSessionCategoryPlayback);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(93,52): error TS2304: Cannot find name 'AVAudioSessionCategoryPlayback'.
94 this.audioSession.setModeError(AVAudioSessionModeDefault);
~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(94,48): error TS2304: Cannot find name 'AVAudioSessionModeDefault'.
107 this.inputNode.installTapOnBusBufferSizeFormatBlock(0, 1024, recordingFormat, (buffer: AVAudioPCMBuffer, when: AVAudioTime) => {
~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(107,96): error TS2304: Cannot find name 'AVAudioPCMBuffer'.
107 this.inputNode.installTapOnBusBufferSizeFormatBlock(0, 1024, recordingFormat, (buffer: AVAudioPCMBuffer, when: AVAudioTime) => {
~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(107,120): error TS2304: Cannot find name 'AVAudioTime'.
126 this.audioSession.setCategoryError(AVAudioSessionCategoryPlayback);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(126,42): error TS2304: Cannot find name 'AVAudioSessionCategoryPlayback'.
127 this.audioSession.setModeError(AVAudioSessionModeDefault);
~~~~~~~~~~~~~~~~~~~~~~~~~
node_modules/nativescript-speech-recognition/speech-recognition.ios.ts(127,38): error TS2304: Cannot find name 'AVAudioSessionModeDefault'.
Preparing project...
Cycle link found. (repeated 13 times)`
I'm using nativescript-vue 2 with NativeScript 5.4, building the app on Android 5.1. When I call
this.speechRecognition.startListening({ onResult: res => { console.log("res", res) }, onError: err => { console.log('1111'); } })
and speak anything into the microphone, the console.log calls in the code print nothing in the terminal.
Besides, when I use stopListening or hide the app (go back home), I receive this error:
System.err: com.tns.NativeScriptException:
System.err: Calling js method run failed
System.err: Error: java.lang.IllegalArgumentException: Service not registered: android.speech.SpeechRecognizer$Connection@85fc805
System.err: android.app.LoadedApk.forgetServiceDispatcher(LoadedApk.java:1141)
System.err: android.app.ContextImpl.unbindService(ContextImpl.java:2254)
System.err: android.content.ContextWrapper.unbindService(ContextWrapper.java:572)
System.err: android.speech.SpeechRecognizer.destroy(SpeechRecognizer.java:408)
System.err: com.tns.Runtime.callJSMethodNative(Native Method)
System.err: com.tns.Runtime.dispatchCallJSMethodNative(Runtime.java:1203)
System.err: com.tns.Runtime.callJSMethodImpl(Runtime.java:1083)
System.err: com.tns.Runtime.callJSMethod(Runtime.java:1070)
System.err: com.tns.Runtime.callJSMethod(Runtime.java:1050)
System.err: com.tns.Runtime.callJSMethod(Runtime.java:1042)
System.err: com.tns.gen.java.lang.Runnable.run(Runnable.java:17)
System.err: android.os.Handler.handleCallback(Handler.java:739)
System.err: android.os.Handler.dispatchMessage(Handler.java:95)
System.err: android.os.Looper.loop(Looper.java:145)
System.err: android.app.ActivityThread.main(ActivityThread.java:6917)
System.err: java.lang.reflect.Method.invoke(Native Method)
System.err: java.lang.reflect.Method.invoke(Method.java:372)
System.err: com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1404)
System.err: com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1199)
System.err: File: "file:///data/data/com.dinhhoabkhn.dictionary/files/app/vendor.js, line: 17086, column: 27
System.err: StackTrace:
System.err: Frame: function:'run', file:'file:///data/data/com.dinhhoabkhn.dictionary/files/app/vendor.js', line: 17086, column: 28
System.err: at com.tns.Runtime.callJSMethodNative(Native Method)
System.err: at com.tns.Runtime.dispatchCallJSMethodNative(Runtime.java:1203)
System.err: at com.tns.Runtime.callJSMethodImpl(Runtime.java:1083)
System.err: at com.tns.Runtime.callJSMethod(Runtime.java:1070)
System.err: at com.tns.Runtime.callJSMethod(Runtime.java:1050)
System.err: at com.tns.Runtime.callJSMethod(Runtime.java:1042)
System.err: at com.tns.gen.java.lang.Runnable.run(Runnable.java:17)
System.err: at android.os.Handler.handleCallback(Handler.java:739)
System.err: at android.os.Handler.dispatchMessage(Handler.java:95)
System.err: at android.os.Looper.loop(Looper.java:145)
System.err: at android.app.ActivityThread.main(ActivityThread.java:6917)
System.err: at java.lang.reflect.Method.invoke(Native Method)
System.err: at java.lang.reflect.Method.invoke(Method.java:372)
System.err: at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1404)
System.err: at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1199)
System.err: Caused by: java.lang.IllegalArgumentException: Service not registered: android.speech.SpeechRecognizer$Connection@85fc805
System.err: at android.app.LoadedApk.forgetServiceDispatcher(LoadedApk.java:1141)
System.err: at android.app.ContextImpl.unbindService(ContextImpl.java:2254)
System.err: at android.content.ContextWrapper.unbindService(ContextWrapper.java:572)
System.err: at android.speech.SpeechRecognizer.destroy(SpeechRecognizer.java:408)
System.err: ... 15 more
ActivityManager: cleanUpApplicationRecord -- 1256
It took me quite a while to find the error. I hope someone helps me solve my problem soon
Is it possible to develop with this plugin while on simulator? I believe we must test this on device, right?
On Android, when leaving and resuming the app, the SpeechRecognizer stops working (the plugin's startListening method) and throws error "7".
I added a console.log to the onError listener in speech-recognition.android.js to see this error.
Also, if after resuming the app I make a silent recording (without speaking), the onError listener throws error "6", and after that the plugin starts working again!
(From the official docs):
Error 7: ERROR_NO_MATCH
Error 6: ERROR_SPEECH_TIMEOUT
If an error occurs during speech recognition (e.g., to force one, call "startListening" and remain silent), Android will invoke "onError". The current plugin implementation then outputs "Error: 'error-code' " to the console and invokes "reject". Because "resolve" has already been invoked within the Promise (at "onReadyForSpeech"), this has no effect, and the application has no way of knowing about the error.
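One way to surface such errors without touching the already-settled Promise would be an optional onError callback in the listening options, sketched below (a suggestion, not the plugin's current API):

```typescript
interface SpeechOptions {
  onResult: (t: { text: string; finished: boolean }) => void;
  // Hypothetical addition: invoked for recognizer errors that occur after
  // the startListening Promise has already resolved (e.g. Android error 7).
  onError?: (errorCode: number) => void;
}

// What the plugin's internal Android onError handler could do instead of
// calling reject() on a Promise that has already been resolved.
function handleRecognizerError(options: SpeechOptions, errorCode: number): boolean {
  if (options.onError) {
    options.onError(errorCode);
    return true; // error delivered to the app
  }
  return false;  // nothing to notify; at best log it
}
```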
Working on a video with the plugin, I ran it on Android and kept getting undefined in the onResult callback of the startListening() method, with transcript.text === undefined.
https://github.com/EddyVerbruggen/nativescript-speech-recognition/blob/master/speech-recognition.android.ts#L117
Running on Android 6.0.
I ended up changing it to the following just to get it working locally for now:
if (!transcripts.isEmpty()) {
  options.onResult({
    text: transcripts.get(0),
    finished: true
  });
}
More of a question than a bug, I think, but hopefully you can help.
Is there any way to get the plugin to do continuous speech recognition, listening for a keyword, or can it only operate push-to-talk for now? How should I disable speech recognition while the device is playing audio, then re-enable it after the device's response is done?
I am hoping to use this plugin in an app, but I'm having trouble getting it to run on a device alongside the nativescript-texttospeech plugin (to respond after keywords are recognized): once the text-to-speech plugin starts responding, the app crashes, I believe because the input/output audio buses are fighting over each other.
Thanks for your help!
I'm new to NativeScript and TypeScript. If you don't have sample code in JavaScript, do you offer any course on Udemy, or one-on-one tutoring, to help me learn and nail the use of this plug-in?
Thanks and stay safe
NativeScript 5.2.3, trying to run the JavaScript demo code, and I get:
nativeException: java.lang.NullPointerException: Attempt to invoke virtual method 'android.content.pm.PackageManager android.content.Context.getPackageManager()' on a null object reference
when running the available() call; the exception was discovered by adding a catch after the then. Nothing else is going on, this is about as basic as it can be.
I wanted to confirm the issue was not caused by running in an emulator (which we typically use for development), so I attached my S9 and got the same thing.
How can this be resolved? Thanks!
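For reference, the catch-after-then pattern mentioned above can be sketched like this, against a stand-in object with the plugin's available(): Promise&lt;boolean&gt; signature (the stand-in is hypothetical; the real object is the plugin instance):

```typescript
// Stand-in for the plugin instance, which exposes available(): Promise<boolean>.
interface SpeechRecognitionLike { available(): Promise<boolean>; }

async function checkAvailable(sr: SpeechRecognitionLike): Promise<boolean> {
  try {
    return await sr.available();
  } catch (err) {
    // e.g. the NullPointerException above, surfaced as a rejected Promise.
    console.error("availability check failed:", err);
    return false;
  }
}
```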
Hi,
We are upgrading an app to NativeScript 7 that uses the speech recognition plugin for transcribing what someone says. At the same time, we record the audio of the speech using a different system. To do this successfully on the previous version of NativeScript, we put the speech recognition functions in a worker.
Everything works fine until you try to stop the recognition; the app then immediately crashes. We have tracked the crash down to the line _this.recognitionRequest.endAudio(); in the stopListening method. If we remove this line, the app no longer crashes (but it also doesn't stop trying to do the recognition and therefore eventually hangs).
We have plugged it into Xcode, and the error that consistently comes back is 'Cannot create a handle without a HandleScope'.
Do you know anything about this, please? Can the recognizer no longer be used in a worker?
Thanks for the plugin, and any help you can provide!
I get this error when I execute tns run android on MacOS High Sierra 10.13.5:
$tns run android
Executing before-prepare hook from /Users/al/code/alfonso/nativescript/nativescript-speech-recognition/demo/hooks/before-prepare/nativescript-dev-typescript.js
Preparing project...
Cycle link found. (repeated 9 times)
Unable to apply changes on device: 09223ac50f22297b. Error is: Processing node_modules failed. Error: cp: cannot create directory '/Users/al/code/nativescript/nativescript-speech-recognition/demo/platforms/android/src/main/assets/app/tns_modules': No such file or directory.
$tns doctor output:
✔ Your ANDROID_HOME environment variable is set and points to correct directory.
✔ Your adb from the Android SDK is correctly installed.
✔ The Android SDK is installed.
✔ A compatible Android SDK for compilation is found.
✔ Javac is installed and is configured properly.
✔ The Java Development Kit (JDK) is installed and is configured properly.
✔ Xcode is installed and is configured properly.
✔ xcodeproj is installed and is configured properly.
✔ CocoaPods are installed.
✔ CocoaPods update is not required.
✔ CocoaPods are configured properly.
✔ Your current CocoaPods version is newer than 1.0.0.
✔ Python installed and configured correctly.
✔ The Python 'six' package is found.
✔ Getting NativeScript components versions information...
✔ Component nativescript has 4.1.2 version and is up to date.
⚠ Update available for component tns-core-modules. Your current version is 3.4.1 and the latest available version is 4.1.0.
⚠ Update available for component tns-android. Your current version is 3.1.1 and the latest available version is 4.1.3.
⚠ Update available for component tns-ios. Your current version is 3.1.0 and the latest available version is 4.1.1.
Is there any function in speech recognition that returns true or false if the microphone is already busy with a call, or the speaker is busy playing music in the background?
I updated the plugin, but no result: this plugin is not working in my application on iOS 13 (13.0, 13.1, 13.1.2, etc.).
When partial results are desired, the finished property should not return true.
Hello,
I would suggest invoking Angular's ChangeDetectorRef when the user triggers the startListening method; otherwise the values change out of sync with the view. It took me a while to find the issue. I found the solution thanks to this link: https://blog.paulhalliday.io/2017/06/24/nativescript-speech-recognition/ where that class is used to detect changes.
Within "sendBackResults", I find the code fragment
for (let i = 0; i < transcripts.size(); i++) {
let transcript = transcripts.get(i);
}
Does it have a purpose which I am missing?
Hi,
I have a problem with the plugin on iOS: the first time it works well, but the second time this error appears:
AVAEInternal.h:70:_AVAE_Check: required condition is false: [AVAEGraphNode.mm:851:CreateRecordingTap: (nullptr == Tap())]
I recently updated to iOS 12.
Thanks
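That AVAEGraphNode assertion typically means a tap is being installed on the audio input bus while a previous one is still attached. A generic guard for that, sketched with injected install/remove functions rather than the actual plugin internals:

```typescript
// Ensures removeTap() always runs before a second installTap(), mirroring
// AVAudioInputNode's removeTapOnBus / installTapOnBus... pairing on iOS.
class TapGuard {
  private installed = false;
  constructor(private installTap: () => void, private removeTap: () => void) {}

  start(): void {
    if (this.installed) this.stop(); // tear down the stale tap first
    this.installTap();
    this.installed = true;
  }

  stop(): void {
    if (!this.installed) return;
    this.removeTap();
    this.installed = false;
  }
}
```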
Not really an issue, more of a question. Do you think it is possible with this plugin to transcribe the audio as it is recording, and also save the audio itself to a file, so you have a recording of what the user originally said?