jovotech / jovo-framework

🔈 The React for Voice and Chat: Build Apps for Alexa, Messenger, Instagram, the Web, and more

Home Page: https://www.jovo.tech

License: Apache License 2.0

amazon-alexa alexa google-assistant voice-assistant voice voice-applications jovo-framework facebook-messenger chatbots conversational-ai

jovo-framework's Introduction

Jovo Framework: The React for Voice and Chat Apps

NEWS: We just launched Jovo v4

Jovo Framework

Website - Docs - Marketplace - Template

Build conversational and multimodal experiences for the web, Alexa, Google Assistant, Messenger, Instagram, Google Business Messages, mobile apps, and more. Fully customizable and open source. Works with TypeScript and JavaScript.

@Component()
export class LoveHatePizzaComponent extends BaseComponent {
  START() {
    return this.$send(YesNoOutput, { message: 'Do you like pizza?' });
  }

  @Intents(['YesIntent'])
  lovesPizza() {
    return this.$send({ message: 'Yes! I love pizza, too.', listen: false });
  }

  @Intents(['NoIntent'])
  hatesPizza() {
    return this.$send({ message: `That's OK! Not everyone likes pizza.`, listen: false });
  }
}

Getting Started

Learn more in our Getting Started Guide.

Install the Jovo CLI:

$ npm install -g @jovotech/cli

Create a new Jovo project (find the v4 template here):

$ jovo new <directory>

Go into the project directory and run the Jovo development server:

# Go into project directory (replace <directory> with your folder)
$ cd <directory>

# Run local development server
$ jovo run

# Press "." to open the Jovo Debugger

Sponsors

We're glad to be supported by respected companies and individuals in the voice-first and conversational AI industry. See our Open Collective to learn more.

Gold Sponsors

Silver Sponsors

Bronze Sponsors

Find all supporters in our BACKERS.md file.

Support Jovo on Open Collective

jovo-framework's People

Contributors

acerbisgianluca, aswetlow, dominik-meissner, error404notfound, fgnass, freisms, github-actions[bot], gpalozzi, igx89, jankoenig, jkcchan, jrglg, justwriteapps, kaankc, kouz75, m-ripper, marcher357, natrixx, nlueg, omenocal, palle-k, renatoalencar, rmtuckerphx, rubenaeg, sadlowskij, stammbach, stephen-wilcox, thebenforce, voice-first-ai, zach-morgan


jovo-framework's Issues

Error using newly created DynamoDB tables

I'm getting an error working with a newly created DynamoDB table.

2017-11-02 18:18:03.604 (+00:00)	2a8333a9-bffa-11e7-a1e5-51dac69da33d	TypeError: Cannot read property 'userData' of null
    at /var/task/node_modules/jovo-framework/lib/jovo.js:226:37
    at DynamoDb.checkResourceNotFound (/var/task/node_modules/jovo-framework/lib/integrations/db/dynamoDb.js:328:13)
    at Response.<anonymous> (/var/task/node_modules/jovo-framework/lib/integrations/db/dynamoDb.js:209:22)
    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:364:18)
    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)
    at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10

I've been trying to dig into this a bit to figure out what is happening, but I'm not sure. When I run the same code locally using a webhook and the local file store, it runs fine. Please let me know what diagnostics I can run to troubleshoot the issue.

Edited to fix styling

app.t('key') and app.speechBuilder().t('key') returning 'undefined'

I am using [email protected]

Here is the app configuration:

const app = require('jovo-framework').Jovo;

exports.handler = function (event, context, callback) {

    let languageResources = require('./languageResources');

    let intentMap = {
        'AMAZON.HelpIntent': 'HelpIntent',
        'AMAZON.StopIntent': 'StopIntent',
        'AMAZON.CancelIntent': 'StopIntent',
        'given-name': 'name'
    };

    app.setConfig({
        requestLogging: true,
        responseLogging: true,
        saveUserOnResponseEnabled: true,
        i18n: {
            resources: languageResources,
            config: { returnObjects: true },
        },
    });

    app.handleRequest(event, callback, handlers);
    app.execute();
};

And the languageResources.json file:

{
    "en-US": {
        "translation": {
            "Test": "the answer",
            "Welcome": [
                {
                    "Message": "Welcome message one.",
                    "Hint": "Welcome message one hint.",
                    "Prompt": "How can I help?",
                    "Reprompt": "Welcome message one reprompt or ask for help for more options. What would you like?"
                }
            ],
            "WelcomeBackMessage": [
                "Welcome back!.",
                "Good to see you again."
            ],
            "WelcomeBackPrompt": [
                "What would you like to do?",
                "How can I help?",
                "What would you like?"
            ],
            "WelcomeBackReprompt": [
                "Welcome back one reprompt. You can ask for help for more options. What would you like?",
                "Welcome back two reprompt. I can help you find an urgent care near you. For more options, say 'Help'. What would you like?"
            ],
            "Hints": [
                "hint 1",
                "hint 2"
            ]
        }
    }
}

Here are the various responses that I get when I try to get to the resources:
app.t('Test')
"the answer"

app.t('Welcome')
undefined

app.t('WelcomeBackMessage')
undefined

app.t('WelcomeBackPrompt')
undefined

app.t('WelcomeBackReprompt')
undefined

app.t('Hints')
undefined

app.speechBuilder().t('Test').speech
"the answer"

app.speechBuilder().t('Welcome').speech
"undefined"

app.speechBuilder().t('WelcomeBackMessage').speech
"undefined"

app.speechBuilder().t('WelcomeBackPrompt').speech
"undefined"

app.speechBuilder().t('WelcomeBackReprompt').speech
"undefined"

app.speechBuilder().t('Hints').speech
"undefined"

app.speech.t('Test').speech
"the answer the answer the answer"

app.speech.t('Welcome').speech
"the answer the answer the answer undefined"

app.speech.t('WelcomeBackMessage').speech
"the answer the answer the answer undefined undefined"

app.speech.t('WelcomeBackPrompt').speech
"the answer the answer the answer undefined undefined undefined"

app.speech.t('WelcomeBackReprompt').speech
"the answer the answer the answer undefined undefined undefined undefined"

app.speech.t('Hints').speech
"the answer the answer the answer undefined undefined undefined undefined undefined"

๐ŸŒ Improve setLanguageResources method

Right now setLanguageResources can only be called from within the lambda or webhook.post building blocks.

Adding a default language (en-US) would make it callable from anywhere and allow it to be added to the setConfig method.

Also, allow a config object to be passed as a parameter for additional i18next configuration, e.g.

app.setLanguageResources(languageResources, { returnObjects: true });

Inputs in DialogFlowV2 requests are plain strings (not objects)

I uploaded the sample action to Dialogflow. After switching to the V2 API, the handler functions no longer receive objects as arguments but plain strings:

{
  'MyNameIsIntent': function(name) {
    this.tell('Hello ' + name.value); // <-- name is a string, not an object!
  }
}

My guess is that the code in dialogFlowV2Request.js:39 should do something similar to the V1 implementation and create an object with name, value, and key properties.

As I'm completely new to Jovo, I might be missing something obvious here. If not and you can confirm that this is a bug I could create a PR with a fix.
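The suggested fix could be sketched like this (field names are assumptions taken from the V1 behavior described above, not the framework's actual code):

```javascript
// Hypothetical normalization for Dialogflow V2 parameters: wrap each plain
// string in the { name, value, key } shape the V1 integration is described
// as producing. Sketch only, based on the report above.
function normalizeInputs(parameters) {
  const inputs = {};
  for (const [name, value] of Object.entries(parameters)) {
    inputs[name] = { name, value, key: value };
  }
  return inputs;
}

console.log(normalizeInputs({ name: 'Sam' }));
// -> { name: { name: 'name', value: 'Sam', key: 'Sam' } }
```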

Error when testing the skill

Hi,

I am trying to run the helloworld project, but Jovo seems to have a problem. When I test from the test console I get:
The requested skill took too long to respond

While the logs in the commandline show:

jovo run

Example server listening on port 3000!
This is your webhook url: https://webhook.jovo.cloud/160dd151-ac4d-488f-92da-93c522732efe
Unhandled Rejection at: Promise Promise {
  <rejected> Error: Something went wrong
    at IncomingMessage.res.on (/usr/local/lib/node_modules/jovo-cli/commands/run.js:153:28)
    at IncomingMessage.emit (events.js:185:15)
    at IncomingMessage.emit (domain.js:422:20)
    at endReadableNT (_stream_readable.js:1106:12)
    at process._tickCallback (internal/process/next_tick.js:178:19) } reason: Error: Something went wrong
    at IncomingMessage.res.on (/usr/local/lib/node_modules/jovo-cli/commands/run.js:153:28)
    at IncomingMessage.emit (events.js:185:15)
    at IncomingMessage.emit (domain.js:422:20)
    at endReadableNT (_stream_readable.js:1106:12)
    at process._tickCallback (internal/process/next_tick.js:178:19)
Unhandled Rejection at: Promise Promise {
  <rejected> Error: Something went wrong
    at IncomingMessage.res.on (/usr/local/lib/node_modules/jovo-cli/commands/run.js:153:28)
    at IncomingMessage.emit (events.js:185:15)
    at IncomingMessage.emit (domain.js:422:20)
    at endReadableNT (_stream_readable.js:1106:12)
    at process._tickCallback (internal/process/next_tick.js:178:19) } reason: Error: Something went wrong
    at IncomingMessage.res.on (/usr/local/lib/node_modules/jovo-cli/commands/run.js:153:28)
    at IncomingMessage.emit (events.js:185:15)
    at IncomingMessage.emit (domain.js:422:20)
    at endReadableNT (_stream_readable.js:1106:12)
    at process._tickCallback (internal/process/next_tick.js:178:19)

Question: webchat userId

Hey guys, I've got a question regarding the user ID and Dialogflow.

Besides my voice service for Google Assistant, I created an additional webchat with the same logic and intents. But for a webchat, there is no user ID provided by Dialogflow.
So my question is: can I set the userId somehow manually?

Thank you and best regards, Markus

Error updating the skill interaction model

Hello,

Thanks a lot (again) for providing the Jovo framework!

I have an issue deploying the interaction model to the Alexa console with the Jovo CLI.

Here is my model file fr-FR.json (it's the example one for now):

{
    "invocation":"my test app",
    "intents":[
        {
            "name":"HelloWorldIntent",
            "phrases":[
                "hello",
                "say hello",
                "say hello world"
            ]
        },
        {
            "name":"MyNameIsIntent",
            "phrases":[
                "{name}",
                "my name is {name}",
                "i am {name}",
                "you can call me {name}"
            ],
            "inputs":[
                {
                    "name":"name",
                    "type":{
                        "alexa":"AMAZON.US_FIRST_NAME",
                        "dialogflow":"@sys.given-name"
                    }
                }
            ]
        }
    ]
}

I run jovo build, then jovo deploy.

And the Alexa console is not updated. Here are the logs:

     ✔ Updating Alexa Skill project for ASK profile default
       Skill Name: redacted (fr-FR)
       Skill ID: amzn1.ask.skill.99c3818c-f294-4be3-8fb1-XXX
       Invocation Name: my test app (fr-FR)
       Endpoint: https://webhook.jovo.cloud/abee97d9-7413-49f1-87fb-XXX
     ✔ Deploying Interaction Model, waiting for build
       ✔ fr-FR
   ✔ Deploying Google Action
     ✔ Creating file /googleAction/dialogflow_agent.zip
       Language model: fr-FR
       Fulfillment Endpoint: https://webhook.jovo.cloud/abee97d9-7413-49f1-87fb-XXX

  Deployment completed.

Unhandled Rejection at: Promise Promise { <rejected> undefined } reason: undefined

I tried removing the platforms folder and redoing jovo build, then jovo run.
Here is what I get:

     ✖ Creating Alexa Skill project for ASK profile default
       -> Unexpected end of JSON input
       Deploying Interaction Model, waiting for build
     Deploying Google Action

Unexpected end of JSON input

Thanks for your help.

Can't run 'jovo run --watch' using 'npm run dev'

I've tried adding a dev script to my package.json file to alias jovo run --watch as npm run dev, but it never works. Every single time I get the following error:

[screenshot: 2018-03-30 at 3:04 PM]

And here's the log file the screenshot references:

/Users/ekramer/.npm/_logs/2018-03-30T19_04_35_339Z-debug.log 0 info it worked if it ends with ok
1 verbose cli [ '/Users/ekramer/.nvm/versions/node/v8.9.4/bin/node',
1 verbose cli   '/Users/ekramer/.nvm/versions/node/v8.9.4/bin/npm',
1 verbose cli   'run',
1 verbose cli   'dev' ]
2 info using [email protected]
3 info using [email protected]
4 verbose run-script [ 'predev', 'dev', 'postdev' ]
5 info lifecycle [email protected]~predev: [email protected]
6 info lifecycle [email protected]~dev: [email protected]
7 verbose lifecycle [email protected]~dev: unsafe-perm in lifecycle true
8 verbose lifecycle [email protected]~dev: PATH: /Users/ekramer/.nvm/versions/node/v8.9.4/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/Users/ekramer/Desktop/voice-assistants/tv-helper/node_modules/.bin:/Users/ekramer/.nvm/versions/node/v8.9.4/bin:/usr/share/www/intranet.directstartv.com/scripts/srcsync-dir:/usr/local/bin:/usr/share/www/devops/scripts:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/share/www/intranet.directstartv.com/scripts/srcsync-dir:/usr/share/www/devops/scripts:/usr/local/sbin
9 verbose lifecycle [email protected]~dev: CWD: /Users/ekramer/Desktop/voice-assistants/tv-helper
10 silly lifecycle [email protected]~dev: Args: [ '-c', 'jovo run --watch' ]
11 silly lifecycle [email protected]~dev: Returned: code: 1  signal: null
12 info lifecycle [email protected]~dev: Failed to exec dev script
13 verbose stack Error: [email protected] dev: `jovo run --watch`
13 verbose stack Exit status 1
13 verbose stack     at EventEmitter.<anonymous> (/Users/ekramer/.nvm/versions/node/v8.9.4/lib/node_modules/npm/node_modules/npm-lifecycle/index.js:285:16)
13 verbose stack     at emitTwo (events.js:126:13)
13 verbose stack     at EventEmitter.emit (events.js:214:7)
13 verbose stack     at ChildProcess.<anonymous> (/Users/ekramer/.nvm/versions/node/v8.9.4/lib/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:55:14)
13 verbose stack     at emitTwo (events.js:126:13)
13 verbose stack     at ChildProcess.emit (events.js:214:7)
13 verbose stack     at maybeClose (internal/child_process.js:925:16)
13 verbose stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)
14 verbose pkgid [email protected]
15 verbose cwd /Users/ekramer/Desktop/voice-assistants/tv-helper
16 verbose Darwin 16.7.0
17 verbose argv "/Users/ekramer/.nvm/versions/node/v8.9.4/bin/node" "/Users/ekramer/.nvm/versions/node/v8.9.4/bin/npm" "run" "dev"
18 verbose node v8.9.4
19 verbose npm  v5.7.1
20 error code ELIFECYCLE
21 error errno 1
22 error [email protected] dev: `jovo run --watch`
22 error Exit status 1
23 error Failed at the [email protected] dev script.
23 error This is probably not a problem with npm. There is likely additional logging output above.
24 verbose exit [ 1, true ]

The weird part is that if I run jovo run --watch directly, without aliasing it to npm run dev, it works.

error in handleElementSelectRequest function

if (elementId === BaseApp.UNHANDLED &&
        !this.config.handlers[
            BaseApp.REQUEST_TYPE_ENUM.ON_ELEMENT_SELECTED
            ][BaseApp.UNHANDLED]) {
        throw new Error('Error: ' + REQUEST_TYPE_ENUM.ON_ELEMENT_SELECTED + ' with elementId ' + this.getSelectedElementId() + ' has not been defined in the handler.');
}

In line 967 of jovo.js, BaseApp. needs to be added before REQUEST_TYPE_ENUM.ON_ELEMENT_SELECTED so as not to throw an error when using list templates! It's working for me now!
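A self-contained sketch of the reported fix (the `BaseApp` object here is a mock with assumed values; the real constants live in jovo-framework):

```javascript
// Sketch of the corrected error path: the enum is referenced through
// BaseApp, so building the message no longer throws a ReferenceError.
const BaseApp = {
  UNHANDLED: 'Unhandled',
  REQUEST_TYPE_ENUM: { ON_ELEMENT_SELECTED: 'ON_ELEMENT_SELECTED' },
};

function buildUnhandledError(selectedElementId) {
  return new Error(
    'Error: ' + BaseApp.REQUEST_TYPE_ENUM.ON_ELEMENT_SELECTED +
    ' with elementId ' + selectedElementId +
    ' has not been defined in the handler.'
  );
}

console.log(buildUnhandledError('list_item_1').message);
```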

/usr/bin/env: 'node\r': No such file or directory

Just installed Jovo on Linux.
I get the error in the subject line on any attempt to run jovo.
It looks like Windows-format line endings (CR-LF) in jovo.js.

Removing the CR from the #! line is enough to get this ship underway...
Wish me calm seas ahead...
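Assuming the file really does have CRLF endings, stripping the carriage returns is enough; a demonstration on a throwaway copy (the `fix-demo.js` filename is just for illustration — run the same `sed` against the installed jovo.js):

```shell
# Demonstrate the fix on a throwaway copy: a script whose shebang line ends
# in CRLF makes /usr/bin/env look for "node\r" instead of "node".
printf '#!/usr/bin/env node\r\nconsole.log("ok");\r\n' > fix-demo.js

# Strip the carriage returns in place (GNU sed; `dos2unix fix-demo.js` or
# `sed -i '' 's/\r$//' fix-demo.js` on macOS do the same job).
sed -i 's/\r$//' fix-demo.js

head -n 1 fix-demo.js   # shebang is now a clean "#!/usr/bin/env node"
```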

Dialogflow cuts string after parameter

Hey guys,
when I define phrases in the language model, the substring after the last input parameter is always lost in the Dialogflow intent file.
The following phrase:

This is {inputX} a Test

will be resolved to

This is {inputX}

in the Dialogflow intent after jovo build.
If I use a second input parameter, e.g.

This is {inputX} a {inputY} Test

it will be resolved up to the next parameter

This is {inputX} a {inputY}

Can you please help me with that issue?

Combine index.js and index_lambda.js

Currently the instructions recommend switching to index_lambda.js to deploy to Lambda, but if you've developed locally it can be awkward to spot which differences between index.js and index_lambda.js are pertinent. In reality there are very few, so it would be useful to use an environment variable or local variable to branch the serving logic.

var isLocal = false;
// Listen for post requests
if(isLocal){
    webhook.listen(3000, function() {
        console.log('Local development server listening on port 3000.');
    });

    webhook.post('/webhook', function(req, res) {
        app.handleRequest(req, res, handlers);
        app.execute();
    });
} else {
   //Lambda
    exports.handler = function(event, context, callback) {
        app.handleRequest(event, callback, handlers);
        app.execute();
    };
}
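Taking the environment-variable route, one sketch: the AWS Lambda runtime sets `AWS_LAMBDA_FUNCTION_NAME` automatically, so nothing needs to be flipped by hand (the helper name below is hypothetical):

```javascript
// Sketch: detect whether we're running inside AWS Lambda. The runtime sets
// AWS_LAMBDA_FUNCTION_NAME automatically, so local development and deployed
// code can share one entry point without a hand-edited flag.
function isRunningInLambda(env = process.env) {
  return Boolean(env.AWS_LAMBDA_FUNCTION_NAME);
}

// In the snippet above, `var isLocal = false;` could then become:
// const isLocal = !isRunningInLambda();
console.log(isRunningInLambda({ AWS_LAMBDA_FUNCTION_NAME: 'my-skill' })); // true
console.log(isRunningInLambda({}));                                       // false
```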

👤 Add 'NEW_USER' default intent

I see myself using the following at the beginning of many intents to route first-time users to a different experience:

if (app.user().isNewUser()) { 
    app.toIntent('LAUNCH');
}

It would be great, additionally to the 'LAUNCH' intent, to have an intent called 'NEW_USER' which could be used to do some initial data formatting etc. before routing the users to a certain intent (either launch or deep intent if necessary).

Unit testing

Adding the ability to perform unit tests easily. Here's how I have tests set up now using @bespoken tools and ava. This isn't the exact code behind it, but an overview of it and how I'm currently using it.

SkillMock API

All the credit goes to @bespoken; I just converted what they built to be promise-based so it was easier for me to work with.

const skill = new SkillMock(/* app id */)

// Starts and stops Alexa and the lambda server
skill.start(), skill.stop()

// launches the voice assistant (aka 'Alexa, ', 'Hey Google')
skill.launched()

// ends the session
skill.sessionEnded(reason)

// run intent by name and pass slots into them
skill.intended(intent, slots)

// run intents by passing in the phrases they're associated with
skill.spoken(utterance)

// set the access token for the app
skill.setAccessToken(token)

// This one isn't part of bespoken tools. It enables easy testing via the response
// that's stored when `skill.launched`, `skill.intended`, `skill.spoken`, or `skill.sessionEnded` runs.
// This just returns the class that I built to help with testing
skill.test(assertion)

Testing API
// skill.test() exposes this api
// All these methods and getter functions are chainable, and they all start from skill.test()
// here's a list of assertion methods that can be used in each context
.is(str) // Assert that the value is the same as the output speech
.truthy() // Assert that the response value is truthy
.falsy() // Assert that the response value is falsy
.not(str) // Assert that the value is not the same as the output speech
.matches(regex), .match(regex) // Assert that the output speech matches regex
.notMatches(regex), .notMatch(regex) // Assert that the output speech doesn't match regex
.startsWith(str) // Assert that the output speech starts with the string
.notStartsWith(str) // Assert that the output speech doesn't start with the string
.endsWith(str) // Assert that the output speech ends with the string
.notEndsWith(str) // Assert that the output speech doesn't end with the string
.includes(str), .contains(str) // Assert that the output speech contains string
.notIncludes(str), .notContains(str) // Assert that the output speech doesn't contain string
.ended() // Assertion to ensure the session has ended
.notEnded() // Assertion to ensure the session hasn't ended

// These getter methods will change the style. So if you want to test against ssml or plain text you have the option
.ssml // changes the context style to ssml (default)
.plain // changes the context style to plain

// These getter methods change the context to test different aspects of the voice app
// all these work with the methods above
.response // changes the context to response (default)
.reprompt // changes the context to reprompt

.sessionAttributes, .attr, .attributes // changes the context to attributes
  // on top of the default assertions here are some attr specific assertions
  .attr.keys(...keys), .attr.key(...keys) // Assert that the keys exist
  .attr.notKeys(...keys), .attr.notKey(...keys) // Assert that the key doesn't exists
  .attr.is(key, expected), .attr.value(key, expected) // Assert that the key value is same as expected
  .attr.not(key, expected), .attr.notValue(key, expected) // Assert that the key value isn't the same as expected
  .attr.type(key, type) // Assert that the key's value is same as type that was passed (`[]` = 'array', `null` = 'null')
  .attr.notType(key, type) // Assert that the key's value isn't the same as the type that was passed
  .attr.truthy(key) // Assert that the key's value is truthy
  .attr.falsy(key) // Assert that the key's value is falsy

.card // changes the context to cards
  card.title // changes the context to the card title (default)
  card.type // changes the context to the card type
  card.image.small, card.small // changes the context to the small card image
  card.image.large, card.large // changes the context to the large card image
  card.text // changes the context to the card text

Here's a basic example of how it looks in a test

test('basic', async (t) => {
  const skill = new SkillMock()
  await skill.start()
  skill.setAccessToken(await mockAccessToken())

  await skill.launched() // Alexa, open [my app]
  await skill.spoken('what\'s my cashback balance')

  skill.test(t)
    .matches(/^your cashback balance is \$[0-9.]+\./) // ensure the response matches this format
    .reprompt
    .is('what else can i help you with?') // ensure the reprompt is exactly this

  await skill.stop()
})
alexa json response for basic
{
  "version": "1.0",
  "response": {
    "shouldEndSession": false,
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak> your cashback balance is $10.20. what else can i help you with? </speak>"
    },
    "reprompt": {
      "outputSpeech": {
        "type": "SSML",
        "ssml": "<speak> what else can i help you with? </speak>"
      }
    }
  },
  "sessionAttributes": {
    "shopper_id": "112341234jlkajfasda2431asddf",
    "customer_id": "5559316",
    "distributor_id": "123412341234",
    "portal_id": 132241234,
    "language": "ENG",
    "country": "USA",
    "gender": "Male",
    "shopper_name": "John Doe",
    "STATE": "_MAIN"
  }
}

Here's more of an advanced use case

test('advanced', async (t) => {
  const expected_prompt = 'would you like to order this product?'
  const skill = new SkillMock()
  await skill.start()
  skill.setAccessToken(await mockAccessToken())

  await skill.launched() // Alexa, open [my app]
  await skill.spoken('search for {Xbox One X}')

  skill.test(t)
    .includes('Microsoft <break time="100ms" /> Xbox One X') // ensure there's a breaktime in there after the product name. It makes it sound better
    .plain // switch to plain context
      .matches(/is \$(?:[0-9]{2,}|[1-9]+).[0-9]{2} usd/i) // ensure the price is formatted correctly on the response
      .includes(expected_prompt) // ensure the prompt is part of the initial response
    .reprompt // switch to the reprompt context
      .is(expected_prompt) // ensure the reprompt matches prompt
    .card // switch to the card context
      .matches(/is \$(?:[0-9]{2,}|[1-9]+).[0-9]{2} usd/i) // ensure the price is formatted correctly
    .text // switch to the `card.text` context
      .notIncludes(expected_prompt) // ensure the card text doesn't include the prompt message
    .image
      .small // switch to the small image context
        .truthy() // ensure the small image exists
        .matches(/__300x300__\.jpg$/i) // ensure the small image is the 300 size
      .large // switch to the large image context
        .truthy() // ensure the large image exists
        .matches(/__600x600__\.jpg$/i) // ensure the small image is the 600 size
    .attr // switch to the attributes context
      .truthy('product') // ensure the product key exists
      .type('product', 'object') // ensure the product key is an object

  await skill.stop()
})
alexa json response for advanced
{
  "version": "1.0",
  "response": {
    "shouldEndSession": false,
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak> Microsoft <break time=\"100ms\" /> Xbox One X - is $499.00 usd. would you like to order this product? </speak>"
    },
    "reprompt": {
      "outputSpeech": {
        "type": "SSML",
        "ssml": "<speak> would you like to order this product? </speak>"
      }
    },
    "card": {
      "type": "Standard",
      "title": "Microsoft Xbox One X is $499.00 USD.",
      "image": {
        "smallImageUrl": "https://img.shop.com/Image/240000/243400/243416/products/1588637534__300x300__.jpg",
        "largeImageUrl": "https://img.shop.com/Image/240000/243400/243416/products/1588637534__600x600__.jpg"
      },
      "text": "Games play better on Xbox One X. Experience 40% more power than any other console 6 teraflops of graphical processing power and a 4K Blu-ray player provides more immersive gaming and entertainment Play with the greatest community of gamers on..."
    }
  },
  "sessionAttributes": {
    "shopper_id": "asdasdfas31sdq14gasdasfd",
    "customer_id": "12341242994",
    "portal_id": 1234123,
    "language": "ENG",
    "country": "USA",
    "gender": "Male",
    "shopper_name": "john doe",
    "STATE": "_SEARCH_RESULTS",
    "product": {
      "id": 112312341234,
      "category_id": 12342,
      "review_count": 341,
      "pricing": "499.00",
      "is_accessory": false,
      "prod_container_id": 1123412341234,
      "merchant_sku": "33414",
      "product_category_id": 12341,
      "prod_id": "1588637534",
      "category_name": "Electronic",
      "rating": 5,
      "container_text": "Microsoft Xbox One X",
      "container_text_phonetic": "Microsoft <break time=\"100ms\" /> Xbox One X",
      "discount_percentage": 0,
      "description": "Games play better on Xbox One X. Experience 40% more power than any other console 6 teraflops of graphical processing power and a 4K Blu-ray player provides more immersive gaming and entertainment Play with the greatest community of gamers on...",
      "image": {
        "small": "https://img.shop.com/Image/240000/243400/243416/products/1588637534__300x300__.jpg",
        "large": "https://img.shop.com/Image/240000/243400/243416/products/1588637534__600x600__.jpg"
      },
      "on_sale": false,
      "locale_id": 10,
      "cashback": "0.47",
      "store_name": "Walmart",
      "currency_code": "USD"
    }
  }
}

Example of how my test files are set up

Real world use case
import SkillMock, { mockAccessToken } from '../../skill-mock'
import ava from 'ava-spec'
const test = ava.group('handlers:main:cashback')

test.beforeEach(async (t) => {
  // initialize the skill (aka `bst.LambdaServer` `bst.BSTAlexa`)
  t.context.skill = new SkillMock()
  // start the skill starts the lambda and alexa servers
  await t.context.skill.start()
  // set the access token for my app. We have our own service behind the scenes that generates it for us.
  t.context.skill.setAccessToken(await mockAccessToken())
})

test.afterEach(async (t) => {
  // after each test stop the skill (aka the lambda and alexa servers)
  await t.context.skill.stop()
})


test.group('CashbackBalanceCheck', (test) => {
  const utterance = 'what is my cashback'

  test('success', async (t) => {
    const skill = t.context.skill
    await skill.launched() // launch the skill (aka `Alexa, `, `Hey Google`,)
    await skill.spoken(utterance) // pass in the utterance that's being tested 
    // since I was already using ava I just pass in their assertion lib to the test class I wrote.
    skill.test(t)
      .plain // convert the response to plain text (strip out the ssml)
      .matches(/^your cashback balance is \$[0-9.]+\./) // check to see if the response matches this regex
      .reprompt // move on to the reprompt
      .is('what else can i help you with?')  // ensure the reprompt is exactly `'what else can i help you with?'`
  })

  test('no cashback', async (t) => {
    const skill = t.context.skill
    // I had to test what the no-cashback response looked like, which required me to use
    // a different email address, so instead of using the initial `accessToken` I set a different one
    skill.setAccessToken(await mockAccessToken('[email protected]'))
    await skill.launched()
    await skill.spoken(utterance)
    skill.test(t)
      .plain
      .includes('you do not currently have any cashback') // the response includes this text
      .reprompt
      .is('what else can i help you with?') // the response is exactly this text
  })

  test('error', async (t) => {
    const skill = t.context.skill
    await skill.launched()
    // cause an error by setting shopper_id to be wrong
    skill.attributes.shopper_id = 'asdfasdfasdfawasdf'
    await skill.spoken(utterance)
    skill.test(t)
      .plain
      .is('something went wrong while trying to get your cashback balance. what else can i help you with?') // ensure the response is this exact text
  })
})

These don't cover every single thing you could test, but I think it's a good starting point. I took the great work that bespoken has already done and tried to simplify it to make it a little more streamlined for my use. Then I made the test interface to make my tests more readable; I modeled it after nixt, a CLI testing framework that simplifies testing CLI tools. There are other things that could definitely be added, but this covered most of my use cases.

Sending empty string to app.tell crashes server

I realise I shouldn't be doing this, but calling app.tell('') (i.e. sending an empty string by mistake) causes:

Unhandled Rejection at: Promise Promise {
Error: Invalid output text:
at Function.toSSML (/Users/dave/Documents/Projects/Jovo Projects/Recipes/node_modules/jovo-framework/lib/platforms/speechBuilder.js:130:19)

at Jovo.tell (/Users/dave/Documents/Projects/Jovo Projects/Recipes/node_modules/jovo-framework/lib/jovo.js:950:27)

Is there any way to gracefully catch this and not crash the server, or should this always be something my own code checks for?
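Until the framework guards against this itself, one workaround is a small wrapper that validates the text before handing it to tell(). A minimal sketch (safeTell and the mock app object are hypothetical, purely for illustration):

```javascript
// Hypothetical guard around app.tell(): fall back to a default
// message instead of letting toSSML() throw on empty output.
function safeTell(app, text, fallback = 'Sorry, something went wrong.') {
  if (typeof text !== 'string' || text.trim().length === 0) {
    text = fallback;
  }
  return app.tell(text);
}

// Mock app object standing in for the real Jovo app:
const mockApp = {
  told: null,
  tell(text) { this.told = text; return this; },
};

safeTell(mockApp, ''); // falls back instead of crashing
console.log(mockApp.told);
```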

Add human-readable IntentLogging

As suggested by @peternann, there should be a nice and human-readable way to log incoming requests that include information that's helpful for debugging and testing (intents, states, and values), but less cluttered than just logging the full requests.

I usually do it like this, but it misses some of the information and displays other details that might not be interesting:

app.setConfig({
    requestLogging: true,
    requestLoggingObjects: ['request'],
   // Other configurations
});

@peternann also suggested it to be auto-enabled.
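A sketch of what such a logger could extract from a raw Alexa-style request: only the request type, intent name, state, and slot values. The summarizeRequest helper and the STATE attribute location are assumptions, not a Jovo API:

```javascript
// Sketch of a human-readable request logger: pull out only the
// intent name, state, and slot values from a raw Alexa-style request.
// (summarizeRequest is hypothetical, not part of the framework.)
function summarizeRequest(request) {
  const intent = request.request && request.request.intent;
  const attributes = (request.session && request.session.attributes) || {};
  const slots = (intent && intent.slots) || {};
  const values = {};
  for (const name of Object.keys(slots)) {
    values[name] = slots[name].value;
  }
  return {
    type: request.request ? request.request.type : undefined,
    intent: intent ? intent.name : undefined,
    state: attributes.STATE, // assumes the state is kept in session attributes
    inputs: values,
  };
}

const sample = {
  session: { attributes: { STATE: 'OnboardingState' } },
  request: {
    type: 'IntentRequest',
    intent: { name: 'MyNameIsIntent', slots: { name: { name: 'name', value: 'Alex' } } },
  },
};
console.log(summarizeRequest(sample));
```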

Add multiple reprompts for Google Assistant

Google Assistant allows for multiple reprompts, for example:

app.ask(`Guess a number`,
  ['I didn\'t hear a number', 'If you\'re still there, what\'s your guess?',
    'We can stop here. Let\'s play again soon.']);

Cannot GET <id>

I installed the sample jovo app using jovo new. After doing jovo run and then using the suggested URL, I get a webpage that just says Cannot GET /366dba2c-c4b0-4b00-b143-76dd5040474a. What am I doing wrong?

Cannot fetch value of capitalized letters

When a parameter is defined in Dialogflow in all capitals, Jovo does not seem to be able to get the value of that parameter.
If debug logging is enabled, you can see that the parameter and the value are passed, but when trying to access the parameter from within the handler ('Intent': function(VARIABLE)) or when using the getInputs() function (e.g. let inputs = app.getInputs(VARIABLE);), VARIABLE returns undefined.
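Until this is fixed, a case-insensitive lookup over the inputs object can serve as a workaround. A sketch (getInputIgnoreCase is a hypothetical helper, not a Jovo function):

```javascript
// Hypothetical workaround: look an input up regardless of the
// casing Dialogflow used when the parameter was defined.
function getInputIgnoreCase(inputs, name) {
  const key = Object.keys(inputs).find(
    (k) => k.toLowerCase() === name.toLowerCase()
  );
  return key === undefined ? undefined : inputs[key];
}

const inputs = { VARIABLE: 'some value' };
console.log(getInputIgnoreCase(inputs, 'variable'));
```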

Error using response variations on ask.

I enabled returnObjects: true for language resources and it caused an error. I had to change this:

static toSSML(text) {
    if (!text) {
        throw Error('Invalid output text: ' + text);
    }
    
    // my changes: check for an array and randomly choose one entry (_ is lodash)
    if (_.isArray(text)) {
        text = _.sample(text);
    }

    text = text.replace(/<speak>/g, '').replace(/<\/speak>/g, '');
    text = text.replace(/&/g, 'and');

    text = '<speak>' + text + '</speak>';
    return text;
}

I just wanted to have variations in my .ask( ... ) after a follow-up state call. Am I using ask() incorrectly for language resources that have an array of text options?
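For reference, the patched method can be exercised standalone. A self-contained sketch of the same behavior, with lodash replaced by plain Array calls so the snippet runs on its own:

```javascript
// Standalone version of the patched toSSML(): accepts a string or an
// array of variations, picks one at random, and wraps it in <speak> tags.
function toSSML(text) {
  if (!text) {
    throw Error('Invalid output text: ' + text);
  }
  if (Array.isArray(text)) {
    text = text[Math.floor(Math.random() * text.length)];
  }
  text = text.replace(/<speak>/g, '').replace(/<\/speak>/g, '');
  text = text.replace(/&/g, 'and');
  return '<speak>' + text + '</speak>';
}

console.log(toSSML(['Hello!', 'Hi there!']));
```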

User Data Not Saved to DynamoDB

First time I am trying to use DynamoDB integration and save user info.

I am running lambda locally with: bst proxy lambda index.js

Configuration is:

    app.setConfig({
        requestLogging: true,
        responseLogging: true,
        saveUserOnResponseEnabled: true,
        i18n: {
            resources: languageResources,
            returnObjects: true,
        },
        db: {
            type: 'dynamodb',
            tableName: 'myskill-users-default',
            awsConfig: {
                region:  'us-east-1'
            }
        }
    });

And some handlers:

const handlers = {

    'NEW_USER': function () {
        app.user().data.launchCount = 0;
        app.toIntent(app.getIntentName());
    },

    'LAUNCH': function () {
        app.user().data.launchCount += 1;
        if (app.user().data.launchCount === 1) {
...
} else { }

An entry is added to DynamoDB but there is no launchCount variable:

{
  "userData": {
    "data": {},
    "metaData": {
      "createdAt": "2017-11-29T00:04:19.384Z",
      "lastUsedAt": "2017-11-29T00:07:18.534Z"
    }
  },
  "userId": "amzn1.ask.account.AE..."
}

Do I need to do anything else to persist and retrieve user data?

Resuming a song

First off - LOVE this framework and what you're doing with it - hoping I can contribute.

I am playing a song with

// play
app
  .alexaSkill()
  .audioPlayer()
  .setOffsetInMilliseconds(0)
  .play(url, alexaToken)
  .tell(speech);

which works great, but my app got declined because I need to handle the Pause and Resume amazon intents. Below is how I am pausing:

// pause
app
  .alexaSkill()
  .audioPlayer()
  .stop();
app.tell("paused");

Is there a current way to resume? I am looking through audioPlayer.js and not seeing anything.

Thanks!
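One pattern that may work, assuming the playback offset can be persisted when the user pauses: remember offsetInMilliseconds when stopping, then call play() again with that stored offset on AMAZON.ResumeIntent. A self-contained sketch where audioPlayer and userData are mocks standing in for app.alexaSkill().audioPlayer() and app.user().data; only the offset bookkeeping is the point:

```javascript
// Sketch of pause/resume bookkeeping with mock objects.
const userData = {}; // stands in for persisted user data

const audioPlayer = {
  calls: [],
  offset: 0,
  setOffsetInMilliseconds(ms) { this.offset = ms; return this; },
  play(url, token) { this.calls.push({ url, token, offset: this.offset }); return this; },
  stop() { this.calls.push({ stopped: true }); return this; },
};

// On pause, remember where playback was:
function pause(currentOffsetMs) {
  userData.offset = currentOffsetMs;
  audioPlayer.stop();
}

// On AMAZON.ResumeIntent, play again from the stored offset:
function resume(url, token) {
  audioPlayer.setOffsetInMilliseconds(userData.offset || 0).play(url, token);
}

pause(52000);
resume('https://example.com/song.mp3', 'token-1');
```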

Restructure input access

Right now, parameters only include the string value of a slot/entity.

To address new features like entity resolutions, let's switch the default to objects, not strings.
The values could be accessed like this:

'MyNameIsIntent': function(name) {
        app.tell('Hey ' + name.value + '! How is it going?');
    },
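A sketch of what such an input object could look like, with a toString() so existing string concatenation keeps working. The field names (value, id) are a suggestion, not a finalized API:

```javascript
// Hypothetical input object: keeps the raw value plus room for
// entity-resolution data, while still behaving like a string.
function makeInput(name, value, id) {
  return {
    name,
    value,
    id, // e.g. a resolved entity id
    toString() { return this.value; },
  };
}

const name = makeInput('name', 'Alex');
console.log('Hey ' + name.value + '! How is it going?');
console.log('Hey ' + name + '! How is it going?'); // toString keeps old code working
```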

โœจ Add Interaction Model abstraction layer as JSON element

This allows developers to update their language models locally for both platforms.

Ideally, it should have one default abstracted language model JSON + 2 optional ones (Alexa, Google Assistant) to account for platform specific features, intents, and parameters.
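A sketch of what such an abstracted model file could look like, with platform-specific blocks layered on top of a shared default. The key names below are illustrative, not a finalized schema:

```json
{
  "invocation": "my skill",
  "intents": [
    {
      "name": "MyNameIsIntent",
      "phrases": ["my name is {name}", "you can call me {name}"],
      "inputs": [{ "name": "name", "type": "FIRST_NAME" }]
    }
  ],
  "alexa": {
    "interactionModel": {
      "languageModel": {
        "intents": [{ "name": "AMAZON.PauseIntent", "samples": [] }]
      }
    }
  },
  "dialogflow": {
    "intents": [{ "name": "Default Fallback Intent" }]
  }
}
```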

Custom Request Logging on AWS Lambda

Hey guys,

I'm building a skill to help you find what streaming service you can watch a show and recommend shows based on another show.

How would I use something like app.onRequest() and app.onResponse() (as listed in the changelog of 1.0.3 here https://github.com/jovotech/jovo-framework-nodejs/blob/master/CHANGELOG.md) in a Lambda?

I know the default generated skill has a call to if(app.isWebhook()){ ... } and then outside of that block has a named export that makes a call to app.handleLambda() to handle the Lambda. How would I modify this to use app.onRequest() and app.onResponse() for custom response logging?
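Assuming onRequest()/onResponse() take a listener callback (the signature is guessed from the changelog entry, not verified), the hooks would be registered once at module load time, before handleLambda() runs. The snippet below mocks the app so the flow is runnable on its own:

```javascript
// Mock of the Jovo app, just to show where the hooks are registered
// relative to handleLambda(). The real signatures may differ.
function makeMockApp() {
  const requestListeners = [];
  const responseListeners = [];
  return {
    logLines: [],
    onRequest(fn) { requestListeners.push(fn); },
    onResponse(fn) { responseListeners.push(fn); },
    handleLambda(event, context, callback) {
      requestListeners.forEach((fn) => fn(event));
      const response = { ok: true };
      responseListeners.forEach((fn) => fn(response));
      callback(null, response);
    },
  };
}

const app = makeMockApp();

// Register the hooks once, outside the handler:
app.onRequest((request) => app.logLines.push('REQ ' + JSON.stringify(request)));
app.onResponse((response) => app.logLines.push('RES ' + JSON.stringify(response)));

// In the real skill this would be the named Lambda export:
const handler = (event, context, callback) => {
  app.handleLambda(event, context, callback);
};

handler({ intent: 'FindShowIntent' }, {}, () => {});
console.log(app.logLines);
```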

toIntent() improvement

On my initial use of toIntent(), I made a horrible assumption that it would also automatically route the inputs. So what I had to do was:

app.toIntent(app.getIntentName(), _.values(app.getInputs()));

Could we add something like:

toIntentWithInputs(intent) {
    this.toIntent(intent, _.values(this.inputs));
}

โœจ Create Option to Skip "State > Unhandled" for Some Intents

I want to be able to define intents that don't go to a state's "Unhandled" intent, when they're not found, but go directly into the global/stateless intent.

Examples could be a CancelIntent that's only used as a Global Intent. Right now, for every state I have to do the following:

'SomeState': {
    // other intents above

    'CancelIntent': function() {
        app.toStatelessIntent('CancelIntent');
    },

    'Unhandled': function() {
        // do something
    },
}
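A sketch of the routing logic this would need: if the current state has no handler for the intent and the intent is on a skip list, fall through to the global handler instead of the state's Unhandled. The function and config names here are made up for illustration:

```javascript
// Toy dispatcher illustrating "skip Unhandled for some intents".
function route(handlers, state, intent, intentsToSkipUnhandled) {
  const stateHandlers = (state && handlers[state]) || {};
  if (stateHandlers[intent]) return { state, intent };
  if (intentsToSkipUnhandled.includes(intent) && handlers[intent]) {
    return { state: null, intent }; // global/stateless intent
  }
  if (stateHandlers['Unhandled']) return { state, intent: 'Unhandled' };
  return { state: null, intent };
}

const handlers = {
  CancelIntent: () => {},
  SomeState: {
    HelpIntent: () => {},
    Unhandled: () => {},
  },
};

console.log(route(handlers, 'SomeState', 'CancelIntent', ['CancelIntent']));
```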

using app instead of this in indexGoogleAssistantCards example code?

Hi,
in the file
jovo-framework-nodejs/examples/google_action_specific/indexGoogleAssistantCards.js

in the handler for "ON_ELEMENT_SELECTED", the 'this' element is used; shouldn't that be the 'app' element?

I did not execute the code, I just noticed this while reviewing.

Thank you!
Best Regards
Michael

Add displayText for Google Assistant

For output on mobile phones, Google Assistant supports a displayText instead of the speech output. Here's an example from their documentation

function simpleResponse () {
  const app = new ActionsSdkApp({request, response});
  app.ask({
    speech: 'Howdy! I can tell you fun facts about ' +
    'almost any number, like 42. What do you have in mind?',
    displayText: 'Howdy! I can tell you fun facts about almost any ' +
    'number. What do you have in mind?'
  });
}

Chatbase Analytics

The Chatbase class is missing from the analytics classes mapping:
app.addChatbaseAnalytics('4ab9ef9c-8dba-4812-9bad-931addaa0494', '1.2');

\jovo-framework\lib\integrations\analytics\analytics.js:65
this.services[name] = new classesMapping[name];

Unable to save user data

Hi. I'm having an issue. For some reason, I can't save user data. I have made sure that the user is signed in. Although, I am running into a problem with that too. When I run

this.googleAction().askForSignIn()

nothing really happens. There should be a popup or something, but nope. Nothing. Anyways, I managed to get around that for now and I'm able to get my user object.

For some reason though when I use

this.user().data.testValue = "test"

It kind of saves the value. It saves it locally, but when you go and launch a new session and try to load it, there's nothing there. How do I fix this? What am I doing wrong? There's no error or anything in the logs about this.

Accessing the request sent to Alexa

Hello,

I'm using Jovo to build a skill for Alexa that tells you what streaming services (e.g Netflix, Hulu) a certain show is on.

I would like to be able to log the request sent to the Alexa API for my logging/debugging purposes.

How can I do this?

This is the request that the Alexa API says was sent for one of my queries:

{
	"body": {
		"version": "1.0",
		"response": {
			"outputSpeech": {
				"type": "SSML",
				"ssml": "<speak>I'm now going to look up the flash</speak>"
			},
			"shouldEndSession": true
		},
		"sessionAttributes": {}
	}
}
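If the goal is just to see the raw payloads, the requestLogging/responseLogging config flags (the same ones used in the DynamoDB configuration earlier in this thread) print the full request and response objects; a sketch:

```javascript
// Logs every incoming request and outgoing response to the console.
app.setConfig({
    requestLogging: true,
    responseLogging: true,
});
```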

i18next language resources - keys displayed

Hey!

I've got a question regarding the i18next language resources app.t('key'):

app.tell(app.t('WELCOME'));

const languageResources = {
    'de-DE': {
        translation: {
            WELCOME: 'Willkommen',
        },
    },
};

From Google Action Simulator:

output: 'Willkommen'

From Dialogflow Debugging

output: 'WELCOME'

What's the reason for this behaviour? I would expect the output to be the same.
Thanks for your great work!

Best regards

Error using newly created DynamoDB tables

I'm having the same issue as #36. I created new tables in DynamoDB to take advantage of encryption at rest. The table I'm passing to setDynamoDb is encrypted, which shouldn't matter. Immediately after launching my Alexa skill, it throws the exception 'Cannot read property userData of null'.

Keep getting "'final_response' must be set" when testing Google Assistant

I got through the Hello World tutorial just fine, and thought I'd give the AdventureGame one a try. I'm using Google Assistant only (not Alexa) and have everything set up (at least well enough for the HelloWorld example to work!) but now whenever I try to use the Google Assistant simulator, I just get the response "My test app isn't responding right now. Try again soon."

and error message (under Validation Errors):

'final_response' must be set

Am I missing something? I have set up my BlueDoorIntent and RedDoorIntent as instructed, have the NodeJS server up and running, ngrok proxy URL put into the Fulfillment section on DialogFlow, set Fulfillment to "use webhook" on each Intent, etc.

๐ŸŒ Add speechBuilder.t method for i18n

Right now, you use the following for a combination of the Jovo speechBuilder and i18next:

let speech = app.speechBuilder()
    .addText(app.t(languageKey1))
    .addText(app.t(languageKey2));

It would be very convenient to implement another t-method for speechBuilder, that would work like so:


let speech = app.speechBuilder()
    .t(languageKey1)
    .t(languageKey2);
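A sketch of how such a t() method could be layered onto a speech builder, assuming it simply resolves the key against the language resources and appends the result. The class below is a toy stand-in, not the Jovo SpeechBuilder:

```javascript
// Toy speech builder with a t() method that resolves an i18n key
// and appends the translated text, so lookups can be chained.
class SpeechBuilder {
  constructor(translations) {
    this.translations = translations;
    this.parts = [];
  }
  addText(text) {
    this.parts.push(text);
    return this;
  }
  t(key) {
    // fall back to the key itself when no translation exists
    return this.addText(this.translations[key] || key);
  }
  build() {
    return this.parts.join(' ');
  }
}

const speech = new SpeechBuilder({ WELCOME: 'Willkommen', ASK_NAME: 'Wie heisst du?' })
  .t('WELCOME')
  .t('ASK_NAME')
  .build();
console.log(speech);
```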

can't sign up for slack support

I don't know where else to post this.

I can't use Slack. After filling out the sign-up form, I am taken to a page that says "Get invited to our community Slack team exchange ideas with fellow Jovo developers. Sign up below:" But the only thing below that is an ad to create my own typeform. What am I doing wrong?
