
swiftspeech's Introduction

SwiftSpeech

Speech Recognition Made Simple

A few lines of code to do this!

Recognize your user's voice elegantly without having to figure out authorization and audio engines.

SwiftSpeech Examples

Aside from this README, the best way to learn more about SwiftSpeech and how speech recognition capabilities are implemented in apps like WeChat is to check out my new project, SwiftSpeech Examples. For now, it contains a WeChat voice message interface mock and the three demos from SwiftSpeech.

WeChat

Features

SwiftSpeech is a wrapper for Apple's Speech framework with deep SwiftUI and Combine integration.

  • UI control and speech recognition functionality in just a few lines of code.
  • Customizable cancelling.
  • SwiftUI style reactive APIs and Combine support.
  • Highly customizable, yet highly reusable, thanks to a composable structure.
  • Fully open low-level APIs.

Installation

Swift Package Manager (Recommended)

In Xcode, select Add Packages... from the File menu and enter the following package URL:

https://github.com/Cay-Zhang/SwiftSpeech
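
If you're declaring the dependency in a Package.swift manifest instead, here's a minimal sketch (the version requirement below is illustrative; check the repository for the latest release):

dependencies: [
    .package(url: "https://github.com/Cay-Zhang/SwiftSpeech", from: "0.9.0")
]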

CocoaPods

pod 'SwiftSpeech'

Getting Started

1. Authorization

Although SwiftSpeech takes care of the verbose authorization process for you, you still have to provide the usage descriptions and specify where you want the authorization request to happen before you start using it.

Usage Descriptions in Info.plist

If you haven't already, add these two keys to your Info.plist: NSSpeechRecognitionUsageDescription and NSMicrophoneUsageDescription.

These are the messages your users will see on their first use, in the alerts that ask them for permission to use speech recognition and to access the microphone.

Here's an example:

<key>NSSpeechRecognitionUsageDescription</key>
<string>This app uses speech recognition to convert your speech into text.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to record audio for speech recognition.</string>

Request Authorization

Place SwiftSpeech.requestSpeechRecognitionAuthorization() where you want the request to happen. A common location is inside an onAppear modifier; it's common enough that there is a snippet called Request Speech Recognition Authorization on Appear in the Xcode Modifiers library.

.onAppear {
    SwiftSpeech.requestSpeechRecognitionAuthorization()
}

2. Try some demos

You can now try out some lightweight demos bundled with the framework using Xcode previews. Click the "Preview on Device" button to try a demo on your device.

static var previews: some View {
    // Two of the demo views below can take a `localeIdentifier: String` as an argument.
    // Example locale identifiers:
    // Chinese, Simplified (China) = "zh_Hans_CN"
    // English (US) = "en_US"
    // Japanese (Japan) = "ja_JP"
    
    Group {
        SwiftSpeech.Demos.Basic(localeIdentifier: yourLocaleString)
        SwiftSpeech.Demos.Colors()
        SwiftSpeech.Demos.List(localeIdentifier: yourLocaleString)
    }
}

Here are the "previews" of your previews:

Demos

3. Build it yourself

Knowing what this framework can do, you can now start to learn about the concepts in SwiftSpeech.

Inspect the source code of SwiftSpeech.Demos.Basic. The only new thing here is this:

SwiftSpeech.RecordButton()                                        // 1. The View Component
    .swiftSpeechRecordOnHold(sessionConfiguration:animation:distanceToCancel:)  // 2. The Functional Component
    .onRecognizeLatest(update: $text)                             // 3. SwiftSpeech Modifier(s)

There are three parts here (and luckily, you can customize every one of them!):

  1. The View Component: A View that is only responsible for UI.
  2. The Functional Component: A component that handles user interaction and provides the essential functionality of speech recognition. In the built-in one here, the first two arguments let you specify the configuration for the recording session (locales and more) and an animation used when the user interacts with the View Component. The third argument sets the distance the user has to swipe up in order to cancel the recording. The framework also provides another Functional Component: .swiftSpeechToggleRecordingOnTap(sessionConfiguration:animation:).
  3. SwiftSpeech Modifier(s): One or more components allowing you to receive and manipulate the recognition results. They can be stacked together to create powerful effects.

For now, you can just use the built-in View Component and Functional Component. Let's explore some SwiftSpeech Modifiers first since every app handles its data differently:

Important: Chaining multiple or identical SwiftSpeech Modifiers together doesn't override any behavior. All of the modifiers' actions will be executed, in order: the modifier closest to the Functional Component executes first, and the farthest executes last.
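
For example, here's a sketch of stacked modifiers, assuming a @State private var text = "" in the enclosing view:

SwiftSpeech.RecordButton()
    .swiftSpeechRecordOnHold()
    .onStartRecording { _ in print("A") }  // closest to the Functional Component: runs first
    .onStartRecording { _ in print("B") }  // farther away: runs last
    .onRecognizeLatest(update: $text)      // updates `text` as results come in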

// 1
// All three demos use these modifiers.
// Inspect the source code of them if you want examples!
.onRecognizeLatest(
    includePartialResults: Bool = true,
    handleResult: (SwiftSpeech.Session, SFSpeechRecognitionResult) -> Void,
    handleError: (SwiftSpeech.Session, Error) -> Void
)

.onRecognize(
    includePartialResults: Bool = true,
    handleResult: (SwiftSpeech.Session, SFSpeechRecognitionResult) -> Void,
    handleError: (SwiftSpeech.Session, Error) -> Void
)

// This one simply assigns the recognized text to the `update` binding and ignores errors.
.onRecognizeLatest(
    includePartialResults: Bool = true,
    update: Binding<String>
)

// This one prints the recognized text and ignores errors.
.printRecognizedText(includePartialResults: Bool = true)

The first group of modifiers encapsulates the core value of SwiftSpeech. It does all the publisher transformation and subscription for you and calls your closures with enough information to support a sophisticated task whenever a recognition result is yielded.

onRecognizeLatest ignores recognition results from the previous recording session (if any) when a new session is started, while onRecognize subscribes to results from every recording session.

In handleResult, the first closure parameter is a SwiftSpeech.Session, which has a unique id for every recording. Use it to distinguish the results of one recording from those of another.

The second is an SFSpeechRecognitionResult, which contains rich information about the recognition: not only the recognized text (result.bestTranscription.formattedString), but also interesting details like speaking rate and pitch.

In handleError, you handle errors produced during the recognition process and also during the initialization of the recording session (such as a microphone activation failure).
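
For example, here's a sketch that keeps one transcript per recording, assuming the enclosing view owns a @State transcripts dictionary keyed by session id (the dictionary is illustrative):

SwiftSpeech.RecordButton()
    .swiftSpeechRecordOnHold()
    .onRecognize(includePartialResults: true) { session, result in
        // `session.id` distinguishes results from different recordings
        transcripts[session.id] = result.bestTranscription.formattedString
    } handleError: { session, error in
        print("Recognition error in session \(session.id): \(error.localizedDescription)")
    }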

// 2
.onStartRecording(appendAction: (SwiftSpeech.Session) -> Void)
.onStopRecording(appendAction: (SwiftSpeech.Session) -> Void)
.onCancelRecording(appendAction: (SwiftSpeech.Session) -> Void)

The second group gives you full control over the whole lifespan of a SwiftSpeech.Session. It runs the provided closures after a recording is started/stopped/cancelled. Inside the closures, you have access to the corresponding SwiftSpeech.Session, which is discussed below.
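
For instance, a sketch that adds haptic feedback when a recording starts, assuming the same @State text as above (UIImpactFeedbackGenerator is standard UIKit; its use here is just an illustration):

SwiftSpeech.RecordButton()
    .swiftSpeechRecordOnHold()
    .onRecognizeLatest(update: $text)
    .onStartRecording { _ in
        // buzz when the recording begins
        UIImpactFeedbackGenerator(style: .medium).impactOccurred()
    }
    .onCancelRecording { session in
        print("Cancelled session \(session.id)")
    }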

// 3
// `SwiftSpeech.ViewModifiers.OnRecognize` uses these modifiers.
// Inspect the source code of it if you want examples!
.onStartRecording(sendSessionTo: Subject)
.onStopRecording(sendSessionTo: Subject)
.onCancelRecording(sendSessionTo: Subject)

The third group is useful if you prefer a reactive programming style. The only new argument here is a Combine.Subject (e.g. a CurrentValueSubject or PassthroughSubject), and the modifier will send the corresponding SwiftSpeech.Session to the Subject after a recording is started/stopped/cancelled.
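
A sketch of the reactive flavor, assuming a subject whose Output is SwiftSpeech.Session and whose Failure is Never, plus a cancelBag: Set<AnyCancellable> (both names are illustrative):

import Combine

let sessionSubject = PassthroughSubject<SwiftSpeech.Session, Never>()

// In `body`:
SwiftSpeech.RecordButton()
    .swiftSpeechRecordOnHold()
    .onStartRecording(sendSessionTo: sessionSubject)

// Elsewhere, subscribe to be notified of each new session:
sessionSubject
    .sink { session in print("New session: \(session.id)") }
    .store(in: &cancelBag)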

SwiftSpeech.Session

Configuration

A session can be configured using a SwiftSpeech.Session.Configuration struct. A configuration contains information such as the locale, the task hint, custom phrases to recognize, options for on-device recognition, and audio session configurations. Inspect SwiftSpeech.Session.Configuration for more details.
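
For example, a minimal configuration sketch using the two parameters shown elsewhere in this README (locale and contextualStrings); the other parameters keep their defaults, and $text is assumed to be a @State string binding:

let configuration = SwiftSpeech.Session.Configuration(
    locale: Locale(identifier: "en-US"),
    contextualStrings: ["SwiftSpeech"]  // phrases the recognizer should favor
)

SwiftSpeech.RecordButton()
    .swiftSpeechRecordOnHold(sessionConfiguration: configuration)
    .onRecognizeLatest(update: $text)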

Customized Subscription to Recognition Results

If the built-in onRecognize(Latest) modifiers do not satisfy your needs, you can subscribe to recognition results via onStart/Stop/CancelRecording.

A Session publishes its recognition results via its resultPublisher. It has an Output type of SFSpeechRecognitionResult and a Failure type of Error.

You will receive a completion event when the Session finishes processing the user's voice (i.e. result.isFinal == true), an error occurs, or you explicitly call cancelRecording() on the session.

A Session also has a convenient publisher called stringPublisher that maps the results to the recognized string.

Independent Use

Here's an example of using a Session independently to recognize the user's voice and receive updates.

import Combine
import SwiftSpeech

var cancelBag = Set<AnyCancellable>()
let configuration = SwiftSpeech.Session.Configuration(locale: Locale(identifier: "en-US"), contextualStrings: ["SwiftSpeech"])
let session = SwiftSpeech.Session(configuration: configuration)
try session.startRecording()
session.stringPublisher?
    .sink(receiveCompletion: { completion in
        // handle the completion event (or an error) here
    }, receiveValue: { text in
        // do something with the text
    })
    .store(in: &cancelBag)

For more, please refer to the documentation of SwiftSpeech.Session.

Customized View Components

A View Component is a dedicated View for design. It does not react to user interaction directly; instead, it reacts to its environment, which lets you focus on the view design and makes the view more composable. User interactions are handled by the Functional Component.

Inspect the source code of SwiftSpeech.RecordButton (again, it's not a Button since it doesn't respond to user interaction). You will notice that it doesn't own any state or apply any gestures. It only responds to the two variables below.

@Environment(\.swiftSpeechState) var state: SwiftSpeech.State
@SpeechRecognitionAuthStatus var authStatus

Both are pretty self-explanatory: the first one represents its current state of recording, and the second one indicates the authorization status of speech recognition.

Here are more details of SwiftSpeech.State:

enum SwiftSpeech.State {
    /// Indicating there is no recording in progress.
    /// - Note: It's the default value for `@Environment(\.swiftSpeechState)`.
    case pending
    /// Indicating there is a recording in progress and the user does not intend to cancel it.
    case recording
    /// Indicating there is a recording in progress and the user intends to cancel it.
    case cancelling
}

authStatus here is an SFSpeechRecognizerAuthorizationStatus. You can also use $authStatus as a shorthand for authStatus == .authorized.

Combined with a Functional Component and some SwiftSpeech Modifiers, you can hopefully build your own fancy record system now!
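
For instance, a minimal custom View Component sketch (the view and its styling are illustrative):

struct RecordCircle: View {
    @Environment(\.swiftSpeechState) var state: SwiftSpeech.State
    @SpeechRecognitionAuthStatus var authStatus

    var body: some View {
        Circle()
            .fill(color)
            .frame(width: 60, height: 60)
            .opacity($authStatus ? 1 : 0.4)  // dimmed until speech recognition is authorized
    }

    private var color: Color {
        switch state {
        case .pending: return .blue
        case .recording: return .red
        case .cancelling: return .gray
        }
    }
}

// Use it exactly like the built-in RecordButton:
// RecordCircle().swiftSpeechRecordOnHold().onRecognizeLatest(update: $text)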

Support SwiftSpeech Modifiers

The library provides two general Functional Components that add a gesture to the view they modify and perform speech recognition for you:

// They already support SwiftSpeech Modifiers.
func swiftSpeechRecordOnHold(
    sessionConfiguration: SwiftSpeech.Session.Configuration = SwiftSpeech.Session.Configuration(),
    animation: Animation = SwiftSpeech.defaultAnimation,
    distanceToCancel: CGFloat = 50.0
) -> some View

func swiftSpeechToggleRecordingOnTap(
    sessionConfiguration: SwiftSpeech.Session.Configuration = SwiftSpeech.Session.Configuration(),
    animation: Animation = SwiftSpeech.defaultAnimation
) -> some View

If you decide to implement a view that involves a custom gesture other than a hold or a tap, you can also support SwiftSpeech Modifiers by adding a delegate and calling its methods at the appropriate time:

var delegate = SwiftSpeech.FunctionalComponentDelegate()

For guidance on how to implement a custom view for speech recognition, refer to ViewModifiers.swift and SwiftSpeech Examples. It's not that hard, really.
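
A sketch of the delegate calls, assuming the delegate's methods mirror the ones that appear in this page's issue snippets (onStopRecording(session:), and by analogy onStartRecording(session:)); recordingSession and sessionConfiguration are illustrative properties of your own view:

var delegate = SwiftSpeech.FunctionalComponentDelegate()

// Inside your custom gesture handler:
func gestureDidBegin() throws {
    let session = SwiftSpeech.Session(configuration: sessionConfiguration)
    recordingSession = session
    try session.startRecording()
    delegate.onStartRecording(session: session)  // notifies the attached SwiftSpeech Modifiers
}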

License

SwiftSpeech is available under the MIT license.

swiftspeech's People

Contributors

cay-zhang

swiftspeech's Issues

install issues

When I put in this install URL, I get the error "Unable to find a specification for 'SwiftSpeech'".

Could you please tell me how I can install it through CocoaPods?

iOS 16 beta 2

This worked fine up to iOS 15.6. I tested it on iOS 16 beta 2 on an iPhone 12 Pro Max, and I always get Thread 1: Fatal error: recordingSession is nil in endRecording() in the following function when I release the speech button.

fileprivate func endRecording() {
    guard let session = recordingSession else { preconditionFailure("recordingSession is nil in \(#function)") }
    recordingSession?.stopRecording()
    delegate.onStopRecording(session: session)
    self.viewComponentState = .pending
    self.recordingSession = nil
}

SwiftSpeech and Other Languages

I know from the examples that SwiftSpeech can handle all supported languages, but I don't see how to implement this functionality. I gather from the example that I must add something like this for Hebrew:

public init(locale: Locale = .autoupdatingCurrent) { self.locale = locale }
public init(localeIdentifier: String) { self.locale = Locale(identifier: "he-IL") }

but I don't understand how to use this setting in SwiftSpeech:

Text(text)
    .onAppear {
        SwiftSpeech.requestSpeechRecognitionAuthorization()
    }

SwiftSpeech.RecordButton()
    .swiftSpeechRecordOnHold(sessionConfiguration: .init(audioSessionConfiguration: .playAndRecord))
    .onRecognize { _, result in
        text = result.bestTranscription.formattedString
        self.text = text
        if text == word { // word from array, checking pronunciation
            playRightSound()
        } else {
            playWrongSound()
        }
    } handleError: { _, _ in }

I appreciate your help!

Graceful notification of microphone activation failure

Hi!

I've encountered an exception due to a bug in iOS 14 beta 4 + AirPods that breaks (at the software level, hopefully) the mic on AirPods. In system apps (iMessage, Recorder, ...), the issue prevents voice recording/recognition from working, but it does not crash the app. In the case of SwiftSpeech, the app crashes with an uncaught exception.

Is it possible to catch such a failure and gracefully pass a notification with the error? Or at least prevent the crash?

The log message is:
Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: IsFormatSampleRateAndChannelCountValid(format)'

Speech Recognition string does not match a (hard coded) string

I assign the speech recognition string to a @State var speechRecogText and I check whether the other, hardcoded string private var text contains the string from the speech recognition. This works properly and prints "contains..." in English, but with the Arabic language it does not work.

@State var speechRecogText: String = ""
private var text: String = "قل"

if textFieldText.contains(speechRecogText) {
    print("contains voice text")
} else {
    print("doesn't contain voice text")
}

Console
// doesn't contain voice text

However when I try to swap the variables like this:

if speechRecogText.contains(textFieldText) {
    print("contains text")
} else {
    print("doesn't contain text")
}

Console
// contains text

What might be the reason for this? Does it have anything to do with the language, or with how Strings actually behave?

Possible volume conflict between SwiftSpeech and AVAudioPlayer?

My app involves both SwiftSpeech's features and sound effects through SwiftySound. All sound effects work fine in the simulator, but on the device all sound effects stop working once the SwiftSpeech button is pressed. I have one button that makes a "click" noise when pressed. It works until I press the SwiftSpeech button.

If I press the SwiftSpeech button first, I get a sound effect the first time, but then not for subsequent presses.

I made a new, simple project without SwiftSpeech just to test the sound, and everything worked fine on the device. I also switched out SwiftySound and used the normal AVAudioPlayer procedure, and the sound works that way, too.

So the only thing I can think of is that there is a conflict between SwiftSpeech and the sound effects. Is it possible that SwiftSpeech is turning off my sound effects? If so, how do I turn them back on? My code appears below:

import SwiftUI
import SwiftSpeech
import SwiftySound
import AVFoundation
import AudioToolbox

// VIEW MODEL
struct ContentView: View {

let emojiArray = ["🐵","🦍","🐶","🐺","🦊","🦝","🐱","🦁","🐅","🐴","🦓","🦌","🐮","🐷","🐐","🐪","🦙","🦒","🐘","🦏","🦛","🐁","🐀","🐰","🦇","🐻","🐨","🐼","🦘","🦃","🐔","🐧","🦅","🦆","🦢","🦉","🦚","🐸","🐊","🐢","🦎","🐍","🐳","🐬","🐟","🐙","🐌","🦋","🐜","🐝","🐞","🦗","🕷","🦂","🦟"]

@State private var emoji = ""
@State private var nextEmoji = ""
@State private var text = "What is this? (Press and Hold)"
@State private var theDescription = ""
@State var isCorrect:Bool
@State var player = AVAudioPlayer()

var body: some View {
    
    ZStack(alignment:.top) {
    VStack(alignment: .center) {
        
        Text (emoji).font(.system(size: 200, weight: .bold, design: .default))
            .onAppear() {
                emoji = emojiArray.randomElement() ?? "none"
               
                theDescription = emoji.applyingTransform(.toUnicodeName, reverse: false) ?? "None"
                print (theDescription) // get the emoji's unicode name
            }

    Text (text)
            .onAppear {
                SwiftSpeech.requestSpeechRecognitionAuthorization()
            }
      .padding()
        
        SwiftSpeech.RecordButton()
        }
            .swiftSpeechRecordOnHold()
            .onRecognize { _, result in
                text = result.bestTranscription.formattedString
                print (text)
                self.text = text
                if theDescription.contains(self.text.uppercased()) == true {
                    
                    print ("That's right")
                    text = "That's right!"
                    isCorrect = true
                    playRightSound() 
                 
                }
                else {print ("That's wrong")
                    text = "Try again!"
                    isCorrect = false
                    playWrongSound()
                }
                    
            } handleError: { _, _ in }
    
    Spacer()
        
    Button("Change Animal") {
        nextEmoji = emojiArray.randomElement() ?? "none"
        while nextEmoji == emoji {
            
            nextEmoji = emojiArray.randomElement() ?? "none"
            
        }
        
        playClickSound()
        emoji = nextEmoji
        text = "What is this? (Press and hold)"
        theDescription = emoji.applyingTransform(.toUnicodeName, reverse: false) ?? "None"
        print (theDescription)
            }
  
        }
        
    }

func playRightSound(){

print ("Playing right sound")

Sound.play(file:"yay.wav")

       }

 func playWrongSound() {
    
 Sound.play(file:"raspberry.wav")
    
}

func playClickSound() {
    
    Sound.play(file:"click.wav")
   
}

struct ContentView_Previews: PreviewProvider {
static var previews: some View {
    
    ContentView(isCorrect:true)
    
}

}

}


Automatic stop of recording after some seconds of silence

Hello ✌️ Thank you for such a wonderful library!

In my app I wanted to implement something similar to the dictation button in Safari search:

  1. User taps button
  2. User speaks
  3. When the user is not speaking for about 2 seconds, dictation stops automatically

This way, the user doesn't need to tap the button again to stop dictation. There are built-in methods swiftSpeechRecordOnHold and swiftSpeechToggleRecordingOnTap, but both of them need additional interaction from the user. I also needed a different button.

Here is how I solved it; maybe this will be helpful for somebody in the future. I'd be happy to hear any comments on how this can be done better:


import SwiftUI
import SwiftSpeech

// Creating new extension with custom record button view
public extension SwiftSpeech {
    struct RecordButtonCustom: View {
        public var body: some View {
            RecordButtonView()
        }
    }
}

// Define new EnvironmentKey for custom state
struct DictationState: EnvironmentKey {
    static let defaultValue: SwiftSpeech.State = .pending
}

// Define new Environment Values for custom state
extension EnvironmentValues {
    var dictationState: SwiftSpeech.State {
        get {
            self[DictationState.self]
        }
        set {
            self[DictationState.self] = newValue
        }
    }
}

struct SwiftSpeechView: View {
    @State private var text = "Tap to Speak"
    @State private var timer: Timer?
    @State var dictationState: SwiftSpeech.State = .pending
    
    var body: some View {
        VStack() {

            Text(text)
            
            SwiftSpeech
                .RecordButtonCustom()
                .swiftSpeechToggleRecordingOnTap(locale: Locale(identifier: "en_US"))
                .onRecognizeLatest(
                    includePartialResults: true,
                    handleResult: { session, result in
                        text = result.bestTranscription.formattedString
                        
                        timer?.invalidate()
                        // initiate timer to stop recording after 2 seconds of silence
                        timer = Timer.scheduledTimer(withTimeInterval: 2.0, repeats: false) { timer in
                            session.stopRecording()
                            dictationState = .pending
                        }
                    },
                    handleError: { session, error in
                        text = "Error \((error as NSError).code)"
                    
                        session.stopRecording()
                        dictationState = .pending
                })
                .onStartRecording { session in
                    dictationState = .recording
                }
                .onStopRecording { session in
                    dictationState = .pending
                }
                .onCancelRecording{ session in
                    dictationState = .cancelling
                }

        }
        .onAppear {
            SwiftSpeech.requestSpeechRecognitionAuthorization()
        }
        .environment(\.dictationState, dictationState)
    }
}

#Preview {
    SwiftSpeechView()
}

// A second file, containing the custom record button view:
import SwiftUI
import SwiftSpeech

struct RecordButtonView: View {
    
    @Environment(\.dictationState) var state: SwiftSpeech.State
    
    public init() { }
    
    var icon: String {
        switch state {
        case .pending:
            return "mic"
        case .recording:
            return "mic.fill"
        case .cancelling:
            return "xmark"
        }
    }
    
    public var body: some View {
        Button("Dictate", systemImage: icon, action: {
            print("Dictate")
        })
        .buttonStyle(.borderless)
        .labelStyle(.iconOnly)
        .help("Dictate")
    }
    
}

#Preview {
    RecordButtonView()
}

My stack:
Xcode 15.1 beta
visionOS 1.0

Unable to use with TextToSpeech

Firstly, the library is awesome.

But I bumped into an issue when trying to use this with text-to-speech.

You can reproduce the issue with the following code:

import AVFoundation
func onSpeechToTextEnded() {
   let utterance = AVSpeechUtterance(string: "Hello world")
   utterance.voice = AVSpeechSynthesisVoice(language: "en-GB") 

   let synthesizer = AVSpeechSynthesizer()
   synthesizer.speak(utterance)
}

If I call this function (onSpeechToTextEnded) before actually using this library, I can hear the voice. But when I try calling this function after using the library to hear some speech, it is not working.

Could you please investigate the issue?
