Medical algorithms, such as decision tree approaches to healthcare management, are useful tools for standardizing responses to events or emergencies and for reducing uncertainty. In emergency situations, however, pausing to read a bedhead algorithm can interfere with management of the patient. Mobile apps exist for following a decision tree, but they still require physical interaction. This project aims to integrate voice controls into a new general app for medical algorithms.
Enabling voice control requires two parts: a speech-to-text service (either Google Cloud Speech-to-Text or Amazon Transcribe) for voice recognition, and LUIS.ai for natural language understanding.
Google Cloud Speech-to-Text requires an API key obtained from the Google Cloud console, to be entered in the app itself.
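As a rough illustration (not the app's actual code), the snippet below sketches how a request to the Google Cloud Speech-to-Text v1 REST `speech:recognize` endpoint could be assembled once the API key is in place. The key placeholder, audio bytes, sample rate and language code are all assumptions for the sake of the example.

```java
import java.util.Base64;

/**
 * Sketch of assembling a Google Cloud Speech-to-Text v1 REST request.
 * Placeholder values (API key, audio, language) are illustrative only.
 */
public class SpeechRequestSketch {
    // Entered in the app at runtime; placeholder here.
    static final String API_KEY = "<your-google-api-key>";

    // URL of the synchronous recognize endpoint, keyed by the API key.
    static String recognizeUrl(String apiKey) {
        return "https://speech.googleapis.com/v1/speech:recognize?key=" + apiKey;
    }

    // JSON request body for a short LINEAR16 audio clip (base64-encoded).
    static String requestBody(byte[] audio) {
        String content = Base64.getEncoder().encodeToString(audio);
        return "{\"config\":{\"encoding\":\"LINEAR16\","
                + "\"sampleRateHertz\":16000,"
                + "\"languageCode\":\"en-GB\"},"
                + "\"audio\":{\"content\":\"" + content + "\"}}";
    }

    public static void main(String[] args) {
        System.out.println(recognizeUrl(API_KEY));
        System.out.println(requestBody(new byte[] {0, 1, 2}));
    }
}
```

The JSON response contains the transcription alternatives, which the app can then forward to the NLU stage.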
Amazon Transcribe requires an AWS access key ID and secret access key, to be entered in the app itself. It also needs a custom vocabulary named "medical" to improve recognition of specialist keywords such as laryngectomy. An example vocabulary (amazontranscribe-customvocab.txt) is available in the root of this repository.
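For reference, Amazon Transcribe's simple custom vocabulary format is a plain-text file with one phrase per line (multi-word phrases use hyphens instead of spaces). The terms below are illustrative examples, not necessarily the contents of amazontranscribe-customvocab.txt:

```
laryngectomy
tracheostomy
anaphylaxis
cricothyroidotomy
```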
The MedicAlgo.json file in the root of the repository describes the LUIS model and needs to be imported into a LUIS.ai app. The appId, key and endpoint must then be filled in in src/main/java/com/mbaxajl3/medicalgo/controllers/NLUController.java.
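To illustrate how those three values are used (this is a sketch, not the actual NLUController code), the snippet below builds the URL for the LUIS.ai v3 prediction endpoint. The endpoint host, appId and key shown are placeholders.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

/**
 * Sketch of addressing the LUIS.ai v3 prediction endpoint once
 * appId, key and endpoint have been filled in. All concrete values
 * below are placeholders, not the project's real credentials.
 */
public class LuisUrlSketch {
    static String predictionUrl(String endpoint, String appId,
                                String key, String query) {
        // LUIS v3 GET prediction against the production slot.
        String q = URLEncoder.encode(query, StandardCharsets.UTF_8);
        return endpoint + "/luis/prediction/v3.0/apps/" + appId
                + "/slots/production/predict"
                + "?subscription-key=" + key
                + "&query=" + q;
    }

    public static void main(String[] args) {
        System.out.println(predictionUrl(
                "https://westeurope.api.cognitive.microsoft.com",
                "<appId>", "<key>", "patient has a laryngectomy"));
    }
}
```

The JSON response contains the top-scoring intent and entities, which the app can map onto the current step of the medical algorithm.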
Testing is split into two parts: local unit tests and instrumented unit tests. The latter automate UI testing and form the bulk of the test suite.
To generate code coverage for the instrumented unit tests in androidTest, run .\gradlew createDebugCoverageReport
This takes over 10 minutes to run and may occasionally fail.