
node-question-answering's Introduction

Question Answering for Node.js


Production-ready Question Answering directly in Node.js, with only 3 lines of code!

This package leverages the power of the 🤗Tokenizers library (built with Rust) to process the input text. It then uses TensorFlow.js to run the DistilBERT-cased model fine-tuned for Question Answering (87.1 F1 score on SQuAD v1.1 dev set, compared to 88.7 for BERT-base-cased). DistilBERT is used by default, but you can use other models available in the 🤗Transformers library in one additional line of code!

It can run models in SavedModel and TFJS formats locally, as well as remote models thanks to TensorFlow Serving.

Installation

npm install question-answering@latest

Quickstart

The following example will automatically download the default DistilBERT model in SavedModel format if not already present, along with the required vocabulary / tokenizer files. It will then run the model and return the answer to the question.

import { QAClient } from "question-answering"; // When using Typescript or Babel
// const { QAClient } = require("question-answering"); // When using vanilla JS

const text = `
  Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season.
  The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.
  As this was the 50th Super Bowl, the league emphasized the "golden anniversary" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as "Super Bowl L"), so that the logo could prominently feature the Arabic numerals 50.
`;

const question = "Who won the Super Bowl?";

const qaClient = await QAClient.fromOptions();
const answer = await qaClient.predict(question, text);

console.log(answer); // { text: 'Denver Broncos', score: 0.3 }

You can also download the model and vocabulary / tokenizer files separately by using the CLI.

Advanced

Using another model

The above example internally makes use of the default DistilBERT-cased model in the SavedModel format. The library is also compatible with any other DistilBERT-based model, as well as with any BERT-based or RoBERTa-based model, in both SavedModel and TFJS formats. A number of such models are available in SavedModel format from the Hugging Face model hub, thanks to the amazing NLP community 🤗.

To specify a model to use with the library, you need to instantiate a model class that you'll then pass to the QAClient:

import { initModel, QAClient } from "question-answering"; // When using Typescript or Babel
// const { initModel, QAClient } = require("question-answering"); // When using vanilla JS

const text = ...
const question = ...

const model = await initModel({ name: "deepset/roberta-base-squad2" });
const qaClient = await QAClient.fromOptions({ model });
const answer = await qaClient.predict(question, text);

console.log(answer); // { text: 'Denver Broncos', score: 0.46 }

Note that using a model hosted on Hugging Face is not a requirement: you can use any compatible model (including any from the HF hub not already available in SavedModel or TFJS format that you converted yourself) by passing the correct local path for the model and vocabulary files in the options.
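
For illustration, loading such a locally converted model could look roughly like this (a minimal sketch: "my-distilbert" and the ./my-models directory are hypothetical, and only the name and path options are assumed here):

import { initModel, QAClient } from "question-answering";

// Hypothetical local directory containing a converted SavedModel and its vocabulary file(s)
const model = await initModel({
  name: "my-distilbert",
  path: "./my-models/my-distilbert"
});
const qaClient = await QAClient.fromOptions({ model });
const answer = await qaClient.predict(question, text); // `question` and `text` as in the Quickstart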

Using models in TFJS format

To use a TFJS model, you simply need to pass RuntimeType.TFJS as the runtime param of initModel (the runtime defaults to the SavedModel one):

import { initModel, RuntimeType } from "question-answering"; // RuntimeType is exported alongside initModel and QAClient

const model = await initModel({ name: "distilbert-base-cased-distilled-squad", runtime: RuntimeType.TFJS });

As with any SavedModel hosted in the HF model hub, the required files for the TFJS models will be automatically downloaded the first time. You can also download them manually using the CLI.
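
For instance, to fetch the default model in TFJS format ahead of time (using the --format option described in the CLI section below):

npx question-answering download --format tfjs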

Using remote models with TensorFlow Serving

If your model is in the SavedModel format, you may prefer to host it on a dedicated server. Here is a simple example using Docker locally:

# Inside our project root, download DistilBERT-cased to its default `.models` location
npx question-answering download

# Download the TensorFlow Serving Docker image
docker pull tensorflow/serving

# Start TensorFlow Serving container and open the REST API port.
# Notice that in the `target` path we add a `/1`:
# this is required by TFX which is expecting the models to be "versioned"
docker run -t --rm -p 8501:8501 \
    --mount type=bind,source="$(pwd)/.models/distilbert-cased/",target="/models/cased/1" \
    -e MODEL_NAME=cased \
    tensorflow/serving &

In the code, you just have to pass RuntimeType.Remote as the runtime and the server endpoint as the path:

const model = await initModel({
  name: "distilbert-base-cased-distilled-squad",
  path: "http://localhost:8501/v1/models/cased",
  runtime: RuntimeType.Remote
});
const qaClient = await QAClient.fromOptions({ model });

Downloading models with the CLI

You can choose to download the model and associated vocab file(s) manually using the CLI. For example, to download the deepset/roberta-base-squad2 model:

npx question-answering download deepset/roberta-base-squad2

By default, the files are downloaded inside a .models directory at the root of your project; you can provide a custom directory by using the --dir option of the CLI. You can also use --format tfjs to download a model in TFJS format (if available). To check all the options of the CLI: npx question-answering download --help.
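
For instance, combining a model name with a custom target directory:

npx question-answering download deepset/roberta-base-squad2 --dir ./my-models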

Using a custom tokenizer

The QAClient.fromOptions params object has a tokenizer field, which can either be a set of options related to the tokenizer files, or an instance of a class extending the abstract Tokenizer class. To extend this class you can create your own, or, if you simply need to adjust some options, you can import and use the provided initTokenizer method, which will instantiate such a class for you.
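
As a rough sketch only (the exact option names passed to initTokenizer are assumptions, so check the library's typings for the real fields):

import { initTokenizer, QAClient } from "question-answering";

// The option names below (name, lowercase) are assumptions for illustration only
const tokenizer = await initTokenizer({
  name: "distilbert-base-cased-distilled-squad",
  lowercase: false
});
const qaClient = await QAClient.fromOptions({ tokenizer });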

Performance

Thanks to the native execution of the SavedModel format in TFJS, the performance of such models is similar to that of the same models run with TensorFlow in Python.

Specifically, here are the results of a benchmark using question-answering with the default DistilBERT-cased model:

  • Running entirely locally (both SavedModel and TFJS formats)
  • Using a (pseudo) remote model server (i.e. local Docker with TensorFlow Serving running the SavedModel format)
  • Using the Question Answering pipeline in the 🤗Transformers library.

[QA benchmark chart] Short texts are between 500 and 1,000 characters; long texts are between 4,000 and 5,000 characters. You can check the question-answering benchmark script here (the transformers one is equivalent). The benchmark was run on a standard 2019 MacBook Pro running macOS 10.15.2.

Troubleshooting

Errors when using TypeScript

There is a known incompatibility in the TFJS library with some types. If you encounter errors when building your project, make sure to pass the --skipLibCheck flag when using the TypeScript CLI, or to add "skipLibCheck": true to your tsconfig.json file under compilerOptions. See here for more information.
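
For reference, the corresponding tsconfig.json entry looks like this:

{
  "compilerOptions": {
    "skipLibCheck": true
  }
}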

Tensor not referenced when running SavedModel

Due to a bug in TFJS, using @tensorflow/tfjs-node to load or execute SavedModel models independently from this library is not recommended for now, since it could overwrite the TF backend used internally by the library. If you do have to, make sure to require both question-answering and @tensorflow/tfjs-node in your code before making any use of either of them.
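
In practice that just means requiring both packages up front, before either is used:

// Require both packages before making any use of either of them
const { QAClient } = require("question-answering");
const tf = require("@tensorflow/tfjs-node");

// ...then load or execute your own SavedModel with tf, and use QAClient as usual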

node-question-answering's People

Contributors

pierrci


node-question-answering's Issues

Browser build request

Is it possible to put together a browser build for this? It would be really valuable in PWA production.

Thanks so much for your great projects!

NPM Install Error!

It's not installing, uggg!!!!

npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE package: '[email protected]',
npm WARN EBADENGINE required: { node: '>=10 < 11 || >=12 <14' },
npm WARN EBADENGINE current: { node: 'v15.14.0', npm: '7.7.6' }
npm WARN EBADENGINE }
npm WARN EBADENGINE Unsupported engine {
npm WARN EBADENGINE package: '[email protected]',
npm WARN EBADENGINE required: { node: '>=10 < 11 || >=12 <14' },
npm WARN EBADENGINE current: { node: 'v15.14.0', npm: '7.7.6' }
npm WARN EBADENGINE }
npm WARN deprecated [email protected]: Please upgrade to @mapbox/node-pre-gyp: the non-scoped node-pre-gyp package is deprecated and only the @mapbox scoped package will recieve updates in the future
npm ERR! code 1
npm ERR! path /mnt/c/Users//OneDriveSchool/Documents/CODING/PERSONAL/MISC/ArticleParserTest/node_modules/tokenizers
npm ERR! command failed
npm ERR! command sh -c node-pre-gyp install
npm ERR! 403 status code downloading tarball https://tokenizers-releases.s3.amazonaws.com/node/0.7.0/index-v0.7.0-node-v88-linux-x64-glibc.tar.gz
npm ERR! node-pre-gyp info it worked if it ends with ok
npm ERR! node-pre-gyp info using node-pre-gyp@0.14.0
npm ERR! node-pre-gyp info using node@15.14.0 | linux | x64
npm ERR! node-pre-gyp WARN Using request for node-pre-gyp https download
npm ERR! node-pre-gyp info check checked for "/mnt/c/Users//OneDriveSchool/Documents/CODING/PERSONAL/MISC/ArticleParserTest/node_modules/tokenizers/bin-package/index.node" (not found)
npm ERR! node-pre-gyp http GET https://tokenizers-releases.s3.amazonaws.com/node/0.7.0/index-v0.7.0-node-v88-linux-x64-glibc.tar.gz
npm ERR! node-pre-gyp http 403 https://tokenizers-releases.s3.amazonaws.com/node/0.7.0/index-v0.7.0-node-v88-linux-x64-glibc.tar.gz
npm ERR! node-pre-gyp ERR! install error
npm ERR! node-pre-gyp ERR! stack Error: 403 status code downloading tarball https://tokenizers-releases.s3.amazonaws.com/node/0.7.0/index-v0.7.0-node-v88-linux-x64-glibc.tar.gz
npm ERR! node-pre-gyp ERR! stack at Request.<anonymous> (/mnt/c/Users//OneDriveSchool/Documents/CODING/PERSONAL/MISC/ArticleParserTest/node_modules/node-pre-gyp/lib/install.js:142:27)
npm ERR! node-pre-gyp ERR! stack at Request.emit (node:events:381:22)
npm ERR! node-pre-gyp ERR! stack at Request.onRequestResponse (/mnt/c/Users//OneDriveSchool/Documents/CODING/PERSONAL/MISC/ArticleParserTest/node_modules/request/request.js:1176:10)
npm ERR! node-pre-gyp ERR! stack at ClientRequest.emit (node:events:369:20)
npm ERR! node-pre-gyp ERR! stack at HTTPParser.parserOnIncomingClient [as onIncoming] (node:_http_client:646:27)
npm ERR! node-pre-gyp ERR! stack at HTTPParser.parserOnHeadersComplete (node:_http_common:129:17)
npm ERR! node-pre-gyp ERR! stack at TLSSocket.socketOnData (node:_http_client:512:22)
npm ERR! node-pre-gyp ERR! stack at TLSSocket.emit (node:events:369:20)
npm ERR! node-pre-gyp ERR! stack at addChunk (node:internal/streams/readable:313:12)
npm ERR! node-pre-gyp ERR! stack at readableAddChunk (node:internal/streams/readable:288:9)
npm ERR! node-pre-gyp ERR! System Linux 5.10.16.3-microsoft-standard-WSL2
npm ERR! node-pre-gyp ERR! command "/usr/local/bin/node" "/mnt/c/Users//OneDriveSchool/Documents/CODING/PERSONAL/MISC/ArticleParserTest/node_modules/.bin/node-pre-gyp" "install"
npm ERR! node-pre-gyp ERR! cwd /mnt/c/Users//OneDrivSchool/Documents/CODING/PERSONAL/MISC/ArticleParserTest/node_modules/tokenizers
npm ERR! node-pre-gyp ERR! node -v v15.14.0
npm ERR! node-pre-gyp ERR! node-pre-gyp -v v0.14.0
npm ERR! node-pre-gyp ERR! not ok

npm ERR! A complete log of this run can be found in:
npm ERR! /home/*/.npm/_logs/2021-05-27T17_22_58_874Z-debug.log

Cannot run another HuggingFace Model

I saved an official distilbert-base-multilingual-cased model in the TensorFlow SavedModel format with the following code:

import os
import tensorflow as tf
from transformers import (TFDistilBertModel, DistilBertConfig, DistilBertTokenizer, DistilBertModel)

model_name = "distilbert-base-multilingual-cased"
if not os.path.exists("./models/" + model_name):
    os.makedirs("./models/" + model_name)

tokenizer = DistilBertTokenizer.from_pretrained(model_name)
tokenizer.save_pretrained("./models/" + model_name)

pt_model = DistilBertModel.from_pretrained(model_name)
pt_model.save_pretrained("./models/" + model_name)
config = DistilBertConfig.from_json_file("./models/" + model_name +
                                         "/config.json")
model = TFDistilBertModel.from_pretrained("./models/" + model_name,
                                          from_pt=True,
                                          config=config)

concrete_function = tf.function(model.call).get_concrete_function([
    tf.TensorSpec([None, 384], tf.int32, name="input_ids"),
    tf.TensorSpec([None, 384], tf.int32, name="attention_mask")
])
tf.saved_model.save(
    model,
    "./models/" + model_name,
    signatures=concrete_function)

I then load the model and the tokenizer perfectly with

const model = await tf.node.loadSavedModel(path_to_distilbert);
const tokenizer = await BertWordPieceTokenizer.fromOptions({
    vocabFile: path.join(chemin, "vocab.txt"),
    lowercase: true,
  });

But when I try to predict, to simply get the embeddings of a sentence, with:

const result = tf.tidy(() => {
    const inputTensor = tf.tensor(tokens.ids, undefined, "int32");
    const maskTensor = tf.tensor(tokens.attentionMask, undefined, "int32");
    return model.predict({
      ["input_ids"]: inputTensor,
      ["attention_mask"]: maskTensor,
    }) as tf.NamedTensorMap;
  });
  console.log(result);

I get an Incompatible shapes error:

Error: Session fail to run with error: [_Derived_]{{function_node __inference_call_4832}} {{function_node __inference_call_4832}} Incompatible shapes: [9,768] vs. [1,384,768]
         [[{{node distilbert/embeddings/add}}]]
         [[StatefulPartitionedCall/StatefulPartitionedCall]]
    at NodeJSKernelBackend.runSavedModel (/mnt/Documents/Projets/BotPress/R_D/R_D_MLjs_pipeline/sdk/node_modules/@tensorflow/tfjs-node/dist/nodejs_kernel_backend.js:1580:43)
    at TFSavedModel.predict (/mnt/Documents/Projets/BotPress/R_D/R_D_MLjs_pipeline/sdk/node_modules/@tensorflow/tfjs-node/dist/saved_model.js:347:52)
    at /mnt/Documents/Projets/BotPress/R_D/R_D_MLjs_pipeline/sdk/dist/test.js:31:22
    at /mnt/Documents/Projets/BotPress/R_D/R_D_MLjs_pipeline/sdk/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:3134:22
    at Engine.scopedRun (/mnt/Documents/Projets/BotPress/R_D/R_D_MLjs_pipeline/sdk/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:3144:23)
    at Engine.tidy (/mnt/Documents/Projets/BotPress/R_D/R_D_MLjs_pipeline/sdk/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:3133:21)
    at Object.tidy (/mnt/Documents/Projets/BotPress/R_D/R_D_MLjs_pipeline/sdk/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:7777:19)
    at test (/mnt/Documents/Projets/BotPress/R_D/R_D_MLjs_pipeline/sdk/dist/test.js:27:23)

I tried to expandDims, to remove the 384 in the concrete function, and the same without a concrete function, but there was no way to get inference...

I tried with Node 13.13.0.

Thanks in advance, I know the problem is a bit borderline...

EDIT: By putting tf.TensorSpec([None, None], ...) I was able to load the model; I'm leaving this here for those who try to load a HuggingFace model in Node.js.

Still, it's not the right way to do it; it works in this special case, but I know it could be done in a cleaner way.
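
For reference, the change mentioned in the edit amounts to relaxing the fixed sequence length in the exported signature; a sketch against the export code above (not an officially documented recipe):

concrete_function = tf.function(model.call).get_concrete_function([
    tf.TensorSpec([None, None], tf.int32, name="input_ids"),      # dynamic sequence length
    tf.TensorSpec([None, None], tf.int32, name="attention_mask")  # instead of a fixed 384
])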

End Logits and Start Logits exactly same values in Windows

Hi, when running the example on Windows, it returns "Denver" instead of "Denver Broncos".
I also tested on Linux and Mac, and in both the result is correct.

I traced the error to this line:
https://github.com/huggingface/node-question-answering/blob/master/src/runtimes/saved-model.worker-thread.ts#L80

The startLogits array and the endLogits array always contain exactly the same values.
I also tested changing the context and the question, and the same behaviour always reproduces: the logits are exactly the same.

Issue running node-question-answering

Hi! Thank you so much for all your incredible work with NLP. I love huggingface.

I tried running this and ran into issues installing.

node-pre-gyp WARN Using request for node-pre-gyp https download
node-pre-gyp WARN Tried to download(404): https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.9/fse-v1.2.9-node-v79-darwin-x64.tar.gz
node-pre-gyp WARN Pre-built binaries not found for [email protected] and [email protected] (node-v79 ABI, unknown) (falling back to source compile with node-gyp)
Traceback (most recent call last):
  File "/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py", line 50, in <module>
    sys.exit(gyp.script_main())
  File "/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/pylib/gyp/__init__.py", line 554, in script_main
    return main(sys.argv[1:])
  File "/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/pylib/gyp/__init__.py", line 547, in main
    return gyp_main(args)
  File "/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/pylib/gyp/__init__.py", line 532, in gyp_main
    generator.GenerateOutput(flat_list, targets, data, params)
  File "/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/pylib/gyp/generator/make.py", line 2215, in GenerateOutput
    part_of_all=qualified_target in needed_targets)
  File "/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/pylib/gyp/generator/make.py", line 802, in Write
    self.WriteCopies(spec['copies'], extra_outputs, part_of_all)
  File "/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/pylib/gyp/generator/make.py", line 1145, in WriteCopies
    env = self.GetSortedXcodeEnv()
  File "/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/pylib/gyp/generator/make.py", line 1885, in GetSortedXcodeEnv
    additional_settings)
  File "/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/pylib/gyp/xcode_emulation.py", line 1616, in GetSortedXcodeEnv
    additional_settings)
  File "/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/pylib/gyp/xcode_emulation.py", line 1527, in _GetXcodeEnv
    if XcodeVersion() >= '0500' and not env.get('SDKROOT'):
TypeError: '>=' not supported between instances of 'tuple' and 'str'

Errors on initial import

Created a blank project using Node v12.17.0, which now has ESM module support ready for primetime (it no longer requires a flag), so I'm not using Babel (which might be the problem?).

Getting the error:

import { QAClient } from 'question-answering';
         ^^^^^^^^
SyntaxError: The requested module 'question-answering' does not provide an export named 'QAClient'

Tried to do a quick debug and everything seems fine, but as soon as I saw the TypeScript file I freaked out 😉 I don't really have any experience with it, so I'm unsure if this is the issue. Would be awesome if the package worked with vanilla Node ESM module imports though.

So I switched to commonjs const { QAClient } = require("question-answering"); and now I'm getting the error:

internal/modules/cjs/loader.js:1188
  return process.dlopen(module, path.toNamespacedPath(filename));
                 ^

Error: The specified module could not be found.
\\?\L:\Training datasets\New folder\node_modules\@tensorflow\tfjs-node\lib\napi-v5\tfjs_binding.node
    at Object.Module._extensions..node (internal/modules/cjs/loader.js:1188:18)
    at Module.load (internal/modules/cjs/loader.js:986:32)
    at Function.Module._load (internal/modules/cjs/loader.js:879:14)
    at Module.require (internal/modules/cjs/loader.js:1026:19)
    at require (internal/modules/cjs/helpers.js:72:18)
    at Object.<anonymous> (L:\Training datasets\New folder\node_modules\@tensorflow\tfjs-node\dist\index.js:58:16)
    at Module._compile (internal/modules/cjs/loader.js:1138:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1158:10)
    at Module.load (internal/modules/cjs/loader.js:986:32)
    at Function.Module._load (internal/modules/cjs/loader.js:879:14)

Even without any code and just the require.

There's mention of this in #7 but no conclusion.

And I have run npx question-answering download.

Also tried 2.x and 3.x versions.

How to test this library?

How can I run this code to test this project?

import { QAClient } from "question-answering";

const text = `
  Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season.
  The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.
  As this was the 50th Super Bowl, the league emphasized the "golden anniversary" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as "Super Bowl L"), so that the logo could prominently feature the Arabic numerals 50.
`;

const question = "Who won the Super Bowl?";

const qaClient = await QAClient.fromOptions();
const answer = await qaClient.predict(question, text);

console.log(answer); // { text: 'Denver Broncos', score: 0.3 }

Do you have a sample js file to test this library?
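
For what it's worth, a minimal runnable file based on the Quickstart just needs the calls wrapped in an async function (a CommonJS sketch, assuming the model files can be downloaded automatically as described in the README):

// index.js — run with `node index.js`
const { QAClient } = require("question-answering");

const text = "..."; // same passage as above
const question = "Who won the Super Bowl?";

async function main() {
  const qaClient = await QAClient.fromOptions();
  const answer = await qaClient.predict(question, text);
  console.log(answer);
}

main();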

Module not found

Compilation issue while just importing import { QAClient } from 'question-answering'; OR
import { BertWordPieceTokenizer } from "tokenizers":

./node_modules/tokenizers/bindings/native.js
Module not found: Can't resolve '../bin-package' in '/ui/node_modules/tokenizers/bindings'

package.json

{
  "name": "search-ui",
  "version": "0.1.0",
  "private": true,
  "devDependencies": {
    "@babel/core": "^7.8.4",
    "react-scripts": "^3.4.0"
  },
  "dependencies": {
    "node-fetch": "^2.6.0",
    "question-answering": "^1.3.0",
    "react": "^16.12.0",
    "react-dom": "^16.12.0",
    "searchkit": "beta",
    "searchkit-express": "^0.2.1"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test --env=jsdom",
    "eject": "react-scripts eject"
  },
  "browserslist": {
    "production": [
      ">0.2%",
      "not dead",
      "not op_mini all"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version"
    ]
  }
}

QAing on a larger corpus?

Hi. Awesome project. So fun.
Wondering what the technique is to ask a question over a larger corpus? More like hundreds of documents, versus a short text snippet as in the sample code. Since it seems to take longer and get less accurate the more text I supply, I'm wondering if there's another technique to work with a larger corpus? Filter first using TF-IDF and then run this QA only on the returned documents?

Thx.
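
For illustration only, the filtering approach suggested above could look roughly like this; a naive term-overlap scorer stands in for real TF-IDF, and nothing here is an API provided by this library:

const { QAClient } = require("question-answering");

// Naive stand-in for TF-IDF: score each document by how many question terms it contains
function scoreDoc(question, doc) {
  const terms = question.toLowerCase().split(/\W+/).filter(Boolean);
  const lowerDoc = doc.toLowerCase();
  return terms.filter(term => lowerDoc.includes(term)).length;
}

async function answerOverCorpus(question, docs, topK = 3) {
  const qaClient = await QAClient.fromOptions();
  // Keep only the topK most promising documents before running the (expensive) QA model
  const candidates = [...docs]
    .sort((a, b) => scoreDoc(question, b) - scoreDoc(question, a))
    .slice(0, topK);
  let best = null;
  for (const doc of candidates) {
    const answer = await qaClient.predict(question, doc);
    if (answer && (!best || answer.score > best.score)) best = answer;
  }
  return best;
}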

Windows Support

npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})

npm ERR! [email protected] install: node-pre-gyp install
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

SyntaxError: await is only valid in async function

my code:

const { QAClient } = require("question-answering"); // If using vanilla JS

const text = `
  Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season.
  The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.
  As this was the 50th Super Bowl, the league emphasized the "golden anniversary" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as "Super Bowl L"), so that the logo could prominently feature the Arabic numerals 50.
`;

const question = "Who won the Super Bowl?";

const qaClient = await QAClient.fromOptions();
const answer = await qaClient.predict(question, text);

console.log(answer); // { text: 'Denver Broncos', score: 0.3 }

Error :

SyntaxError: await is only valid in async function
at wrapSafe (internal/modules/cjs/loader.js:1063:16)
at Module._compile (internal/modules/cjs/loader.js:1111:27)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1167:10)
at Module.load (internal/modules/cjs/loader.js:996:32)
at Function.Module._load (internal/modules/cjs/loader.js:896:14)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47
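
In CommonJS, await can only be used inside an async function, so the calls need to be wrapped; a minimal adjustment of the snippet above:

const { QAClient } = require("question-answering");

// `text` and `question` defined exactly as above

(async () => {
  const qaClient = await QAClient.fromOptions();
  const answer = await qaClient.predict(question, text);
  console.log(answer); // { text: 'Denver Broncos', score: 0.3 }
})();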

403 error on install

How to reproduce:

  • Start a new project using node 14
  • run npm install question-answering@latest

Expected behavior: Exit code of 0, all is fine
Actual behavior:

[Screenshot of the 403 install error]

Is there multilanguage support?

I'm searching for a QA solution in French. All the docs and the community seem to be using it in English.

Any pointers would be welcome!

UnhandledPromiseRejectionWarning: Error: Cannot find module '../bin-package'

Weird error here, 2 different projects and 1 doesn't have this bin-package folder. Running node v12.18.4 if anyone has any ideas?

(node:93725) UnhandledPromiseRejectionWarning: Error: Cannot find module '../bin-package'
Require stack:

  • /Users/admindave/Source/heroku-fetchlet/fetchlet/node_modules/tokenizers/bindings/native.js
  • /Users/admindave/Source/heroku-fetchlet/fetchlet/node_modules/tokenizers/bindings/decoders.js
  • /Users/admindave/Source/heroku-fetchlet/fetchlet/node_modules/tokenizers/implementations/tokenizers/bert-wordpiece.tokenizer.js
  • /Users/admindave/Source/heroku-fetchlet/fetchlet/node_modules/tokenizers/implementations/tokenizers/index.js
  • /Users/admindave/Source/heroku-fetchlet/fetchlet/node_modules/tokenizers/index.js
  • /Users/admindave/Source/heroku-fetchlet/fetchlet/node_modules/question-answering/dist/qa.js
  • /Users/admindave/Source/heroku-fetchlet/fetchlet/node_modules/question-answering/dist/index.js
  • /Users/admindave/Source/heroku-fetchlet/fetchlet/controllers/delete.js
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:965:15)
    at Function.Module._load (internal/modules/cjs/loader.js:841:27)
    at Module.require (internal/modules/cjs/loader.js:1025:19)
    at require (internal/modules/cjs/helpers.js:72:18)
at Object.<anonymous> (/Users/admindave/Source/heroku-fetchlet/fetchlet/node_modules/tokenizers/bindings/native.js:1:16)
    at Module._compile (internal/modules/cjs/loader.js:1137:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1157:10)
    at Module.load (internal/modules/cjs/loader.js:985:32)
    at Function.Module._load (internal/modules/cjs/loader.js:878:14)
    at Module.require (internal/modules/cjs/loader.js:1025:19)

403 error

Hello,

First, thanks for your work, it looks great. Wish I could try it, but:

I am stuck trying your package on the first step:

npx question-answering download

I have this issue coming up:

node-pre-gyp ERR! stack Error: 403 status code downloading tarball https://tokenizers-releases.s3.amazonaws.com/node/0.4.1/index-v0.4.1-node-v57-darwin-x64-unknown.tar.gz

It could be a problem on my side like my ISP blocking, but I doubt it. And I am not using any proxies. Am I the only one experiencing this?

Not working for me at all.

I tried on both Windows and Ubuntu, but the example code is not working for me at all. I'm getting errors like:

The requested module 'question-answering' does not provide an export named 'QAClient'
.fromOptions is not a function
Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

List of other models?

I tried downloading other models like

bert-uncased
bert-cased
bert-base-uncased
bert-large-uncased
bert-large-uncased-whole-word-masking-finetuned-squad

None of them worked, and they gave errors like the one below:

npx question-answering download bert-uncased --format saved_model
cli.js download [model]

Download a model (defaults to distilbert-cased)

Positionals:
  model                                   [string] [default: "distilbert-cased"]

Options:
  --version    Show version number                                     [boolean]
  --help       Show help                                               [boolean]
  --dir        The target directory to which download the model
                                                 [string] [default: "./.models"]
  --format     Format to download              [string] [default: "saved_model"]
  --force, -f  Force download of model and vocab, erasing existing if already
               present                                                 [boolean]

Error: The requested model doesn't seem to exist
    at Object.downloadModel [as handler] (C:\Users\keki\node_modules\question-answering\scripts\cli.js:91:13)
    at processTicksAndRejections (internal/process/task_queues.js:94:5)


Where can I get the list of valid model names to download?

Adding GPT-2 to the list?

Great work. I'm wondering if the GPT-2 model is on your list of models to be added here? Otherwise, a tutorial on adding more models would be appreciated.

error when trying to run

I get this error. Any ideas?

node server.js

C:\Projects\SQuAD\node_modules\tokenizers\bindings\tokenizer.js:4
static fromString = native.tokenizer_Tokenizer_from_string;
^

SyntaxError: Unexpected token =
at Module._compile (internal/modules/cjs/loader.js:723:23)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
at Module.load (internal/modules/cjs/loader.js:653:32)
at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
at Function.Module._load (internal/modules/cjs/loader.js:585:3)
at Module.require (internal/modules/cjs/loader.js:692:17)
at require (internal/modules/cjs/helpers.js:25:18)
at Object.<anonymous> (C:\Projects\SQuAD\node_modules\tokenizers\implementations\tokenizers\bert-wordpiece.tokenizer.js:9:21)
at Module._compile (internal/modules/cjs/loader.js:778:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)

Can't resolve '/bin-package' via webpack

I'm trying to use this module for a chrome extension using webpack. When I run npx webpack I receive the following error:
ERROR in ./node_modules/question-answering/node_modules/tokenizers/bindings/native.js
Module not found: Error: Can't resolve '/bin-package' in '....\node_modules\question-answering\node_modules\tokenizers\bindings'
 @ ./node_modules/question-answering/node_modules/tokenizers/bindings/native.js 1:15-38
 @ ./node_modules/question-answering/node_modules/tokenizers/bindings/decoders.js
 @ ./node_modules/question-answering/node_modules/tokenizers/implementations/tokenizers/bert-wordpiece.tokenizer.js
 @ ./node_modules/question-answering/node_modules/tokenizers/implementations/tokenizers/index.js
 @ ./node_modules/question-answering/node_modules/tokenizers/index.js
 @ ./node_modules/question-answering/dist/qa.js
 @ ./node_modules/question-answering/dist/index.js

Is this module compatible with webpack?
P.S. If not, how can I run huggingface models using TFJS?
