
bert_serving's People

Contributors

0xflotus, abhishekraok, aijunbai, ammarasmro, bogdandidenko, cbockman, craigcitro, eric-haibin-lin, georgefeng, imcaspar, jacobdevlin-google, jasonjpu, pengli09, practicingman, qwfy, rodgzilla, stefan-it, zhaoyongke


bert_serving's Issues

Exporting extract_features for TensorFlow Serving

I need to export the BERT model for TensorFlow Serving so that I can get the embeddings as outputs, as in extract_features.py.

I followed your run_classifier.py example and modified extract_features.py by adding the flags 'do_export' and 'export_dir', adding a serving_input_fn, and calling estimator.export_savedmodel.

When I run my script

python extract_features_export.py --input_file=/tmp/input.txt --output_file=/tmp/output.jsonl --vocab_file=$BERT_BASE_DIR/vocab.txt --bert_config_file=$BERT_BASE_DIR/bert_config.json --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt --layers=-1,-2,-3,-4 --max_seq_length=128 --batch_size=8 --do_export=True --export_dir='/models'

the model runs but gives me an error when trying to export.

INFO:tensorflow:*** Example ***
INFO:tensorflow:unique_id: 0
INFO:tensorflow:tokens: [CLS] who was jim henson ? [SEP] jim henson was a puppet ##eer [SEP]
INFO:tensorflow:input_ids: 101 2040 2001 3958 27227 1029 102 3958 27227 2001 1037 13997 11510 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:input_type_ids: 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
WARNING:tensorflow:Estimator's model_fn (<function model_fn_builder.<locals>.model_fn at 0x7f0d1b3811e0>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmp3eh_rldt
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmp3eh_rldt', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': None, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f0d12f0b6d8>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=2, num_shards=8, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None), '_cluster': None}
INFO:tensorflow:_TPUContext: eval_on_tpu True
WARNING:tensorflow:eval_on_tpu ignored because use_tpu is False.
INFO:tensorflow:Could not find trained model in model_dir: /tmp/tmp3eh_rldt, running initialization to predict.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Running infer on CPU
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
2018-12-23 09:33:37.252118: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-12-23 09:33:37.252483: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.797
pciBusID: 0000:01:00.0
totalMemory: 7.93GiB freeMemory: 7.09GiB
2018-12-23 09:33:37.252510: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2018-12-23 09:33:37.435350: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-23 09:33:37.435377: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2018-12-23 09:33:37.435382: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2018-12-23 09:33:37.435506: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6834 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:prediction_loop marked as finished
INFO:tensorflow:prediction_loop marked as finished
Traceback (most recent call last):
  File "extract_features_export.py", line 431, in <module>
    tf.app.run()
  File "/home/m/anaconda3/envs/gpu-dev/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "extract_features_export.py", line 423, in main
    estimator.export_savedmodel(FLAGS.export_dir, serving_input_fn, strip_default_attrs=True)
  File "/home/m/anaconda3/envs/gpu-dev/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 663, in export_savedmodel
    mode=model_fn_lib.ModeKeys.PREDICT)
  File "/home/m/anaconda3/envs/gpu-dev/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 789, in _export_saved_model_for_mode
    strip_default_attrs=strip_default_attrs)
  File "/home/m/anaconda3/envs/gpu-dev/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 878, in _export_all_saved_models
    raise ValueError("Couldn't find trained model at %s." % self._model_dir)
ValueError: Couldn't find trained model at /tmp/tmp3eh_rldt.

Any ideas on how I can get this fixed?
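For reference, the ValueError is raised because the estimator is constructed without a model_dir, so export_savedmodel looks for a checkpoint in the temporary directory and finds none. Below is a minimal sketch of one workaround, assuming the estimator and FLAGS from extract_features.py; the serving_input_fn here is illustrative, not code from the repo.

import tensorflow as tf

def serving_input_fn():
    # Placeholders mirroring the features built in extract_features.py.
    features = {
        "unique_ids": tf.placeholder(tf.int32, [None], name="unique_ids"),
        "input_ids": tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name="input_ids"),
        "input_mask": tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name="input_mask"),
        "input_type_ids": tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name="input_type_ids"),
    }
    return tf.estimator.export.ServingInputReceiver(features, features)

# Pointing export_savedmodel at the pretrained checkpoint explicitly avoids
# the lookup in the (empty) temporary model_dir.
estimator.export_savedmodel(
    FLAGS.export_dir,
    serving_input_fn,
    checkpoint_path=FLAGS.init_checkpoint,
    strip_default_attrs=True)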

gRPC client issue

I exported the model using the suggested serving function (unique_ids instead of label_id), as shown below by
saved_model_cli show --all --dir $model_saved_dir:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_ids'] tensor_info:
        dtype: DT_INT32
        shape: (-1, 320)
        name: input_ids_1:0
    inputs['input_mask'] tensor_info:
        dtype: DT_INT32
        shape: (-1, 320)
        name: input_mask_1:0
    inputs['segment_ids'] tensor_info:
        dtype: DT_INT32
        shape: (-1, 320)
        name: segment_ids_1:0
    inputs['unique_ids'] tensor_info:
        dtype: DT_INT32
        shape: (-1)
        name: unique_ids_1:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['end_logits'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 320)
        name: unstack:1
    outputs['start_logits'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 320)
        name: unstack:0
    outputs['unique_ids'] tensor_info:
        dtype: DT_INT32
        shape: (-1)
        name: unique_ids_1:0

The gRPC server started on port 8500 successfully; however, when I call it through the gRPC client, I get an error:
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

url_port = '0.0.0.0:8500'
channel = grpc.insecure_channel(url_port)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
request = predict_pb2.PredictRequest()
# data is prepared following the standard...
...
request.inputs['input_ids'].CopyFrom(tf.contrib.util.make_tensor_proto(data.input_ids, shape=[320], dtype=tf.int32))
request.inputs['input_mask'].CopyFrom(tf.contrib.util.make_tensor_proto(data.input_mask, shape=[320], dtype=tf.int32))
request.inputs['segment_ids'].CopyFrom(tf.contrib.util.make_tensor_proto(data.segment_ids, shape=[320], dtype=tf.int32))
request.inputs['unique_ids'].CopyFrom(tf.contrib.util.make_tensor_proto(data.unique_id, shape=[1], dtype=tf.int32))

predict_response = stub.Predict(request, timeout=10.)

ERROR:
raise _Rendezvous(state, None, None, deadline)
_Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.INVALID_ARGUMENT, unique_ids_1:0 is both fed and fetched.)>

My question:
Is this because of the name ("unique_ids") conflict between the input and the output? How do I get around it?
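Most likely yes: the serving signature returns the same unique_ids placeholder that is fed as an input, so TensorFlow Serving rejects the request because one tensor is both fed and fetched. One common workaround (a sketch, not code from this repo) is to return a copy of the tensor via tf.identity in the PREDICT branch of the model_fn, so the output gets its own node name:

# Hypothetical PREDICT branch of run_squad's model_fn; tf.identity gives the
# unique_ids output a node name distinct from the fed placeholder.
predictions = {
    "unique_ids": tf.identity(unique_ids, name="unique_ids_out"),
    "start_logits": start_logits,
    "end_logits": end_logits,
}
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
    mode=mode, predictions=predictions, scaffold_fn=scaffold_fn)

After re-exporting, the unique_ids output should map to a name like unique_ids_out:0 instead of unique_ids_1:0, and the request above should no longer trigger the error. If re-exporting is not an option, the PredictRequest proto also has an output_filter field that can restrict which outputs are fetched.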

Sample curl?

Hi, I'm wondering how to consume the model. Do you have any curl example for querying BERT served from TensorFlow Serving?
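Not from this repo, but here is a sketch of what a REST call could look like, assuming tensorflow_model_server was started with --rest_api_port=8501, the model was exported under the (hypothetical) name bert, and the serving signature takes unique_ids/input_ids/input_mask/segment_ids. The id lists are shortened here; real requests need lists of length max_seq_length.

curl -X POST http://localhost:8501/v1/models/bert:predict \
  -H "Content-Type: application/json" \
  -d '{
        "instances": [
          {
            "unique_ids": 0,
            "input_ids": [101, 2040, 2001, 102],
            "input_mask": [1, 1, 1, 1],
            "segment_ids": [0, 0, 0, 0]
          }
        ]
      }'

Note that the gRPC endpoint on port 8500 cannot be called with curl; the REST endpoint has to be enabled explicitly when starting the server.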

Cannot export BERT model

I did it as in your demo, but got this error:
Traceback (most recent call last):
  File "/opt/tiger/sunflowers/bert_test/bert/run_classifier.py", line 937, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/opt/tiger/sunflowers/bert_test/bert/run_classifier.py", line 860, in main
    estimator.export_savedmodel(FLAGS.export_dir, serving_input_fn)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 639, in export_savedmodel
    mode=model_fn_lib.ModeKeys.PREDICT)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 765, in _export_saved_model_for_mode
    strip_default_attrs=strip_default_attrs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 883, in _export_all_saved_models
    mode=model_fn_lib.ModeKeys.PREDICT)
TypeError: _add_meta_graph_for_mode() got multiple values for keyword argument 'mode'

export_outputs error

Hello @bigboNed3, I used bert_serving to produce a SavedModel file, but the error below occurred. I don't know how to fix it; do you have any suggestions?

[screenshot of the export_outputs error; not reproduced here]

Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary

Does anybody know how to fix this problem:

Traceback (most recent call last):
  File "request_queryclassifier_client.py", line 183, in <module>
    tf.app.run()
  File "/data/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "request_queryclassifier_client.py", line 167, in main
    result = stub.Predict(request, 1000.0)  # 10 secs timeout
  File "/data/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 533, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/data/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
    status = StatusCode.INVALID_ARGUMENT
    details = "NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: bert/encoder/Cast = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false, _output_shapes=[[?,1,16]], _device="/job:localhost/replica:0/task:0/device:CPU:0"]. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
     [[Node: bert/encoder/Cast = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false, _output_shapes=[[?,1,16]], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]"
    debug_error_string = "{"created":"@1553569490.204528585","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: bert/encoder/Cast = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false, _output_shapes=[[?,1,16]], _device="/job:localhost/replica:0/task:0/device:CPU:0"]. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).\n\t [[Node: bert/encoder/Cast = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false, _output_shapes=[[?,1,16]], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]","grpc_status":3}"

PS: the TensorFlow version is 1.12.0 in both the model-export phase and the inference phase.

Has anyone tried to export an un-fine-tuned BERT model?

Hi, I followed this repo to serve the BERT model as a language model, so I tried to export the original (pretrained) BERT model as a SavedModel but failed. The error messages are as follows:

Traceback (most recent call last):
  File "export_lm_predictor.py", line 136, in <module>
    './exported_model', serving_input_receiver_fn(max_seq_len, 20))
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 734, in export_saved_model
    strip_default_attrs=True)
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 663, in export_savedmodel
    mode=model_fn_lib.ModeKeys.PREDICT)
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 789, in _export_saved_model_for_mode
    strip_default_attrs=strip_default_attrs)
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 878, in _export_all_saved_models
    raise ValueError("Couldn't find trained model at %s." % self._model_dir)
ValueError: Couldn't find trained model at ../bert_models/chinese_L-12_H-768_A-12.

I guess this is because there are no graph.pbtxt or checkpoint files in the original model directory. Does anyone have any ideas? Thanks!

[edit]
I specified the checkpoint_path parameter in the export_saved_model function. By the way, I create the estimator using tf.estimator.Estimator. Then I got a new error:
ValueError: Couldn't find 'checkpoint' file or checkpoints in given directory ../bert_models/chinese_L-12_H-768_A-12
So must we have a 'checkpoint' file in the original BERT model directory?
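For what it's worth, tf.train.latest_checkpoint only works when the directory contains a 'checkpoint' index file, which the pretrained release does not ship. Two sketches of workarounds, assuming the standard chinese_L-12_H-768_A-12 layout where the checkpoint prefix is bert_model.ckpt, and with estimator and serving_input_receiver_fn taken from your own export script:

import tensorflow as tf

ckpt_dir = "../bert_models/chinese_L-12_H-768_A-12"

# Option 1: pass the checkpoint prefix itself rather than the directory.
estimator.export_saved_model(
    "./exported_model",
    serving_input_receiver_fn,
    checkpoint_path=ckpt_dir + "/bert_model.ckpt")

# Option 2: write the small 'checkpoint' index file that
# tf.train.latest_checkpoint expects, then export as before.
with tf.gfile.GFile(ckpt_dir + "/checkpoint", "w") as f:
    f.write('model_checkpoint_path: "bert_model.ckpt"\n')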

export error

Instructions for updating:
Pass your op to the equivalent parameter main_op instead.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.

Similar saved_model for run_squad?

Hi @bigboNed3
Do you have similar files to export BERT for SQuAD? I am new to TensorFlow and am having difficulty figuring out what the serving function for SQuAD should be.
This would be a great help!
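Not an official file from this repo, but here is a minimal sketch of a serving function for run_squad.py, based on the features its model_fn expects (unique_ids, input_ids, input_mask, segment_ids). FLAGS.max_seq_length and estimator are assumed to come from run_squad.py itself, and FLAGS.export_dir would be a flag you add, as in the run_classifier.py example.

import tensorflow as tf

def serving_input_receiver_fn():
    # Raw placeholders matching the features consumed by run_squad's model_fn.
    features = {
        "unique_ids": tf.placeholder(tf.int32, [None], name="unique_ids"),
        "input_ids": tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name="input_ids"),
        "input_mask": tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name="input_mask"),
        "segment_ids": tf.placeholder(tf.int32, [None, FLAGS.max_seq_length], name="segment_ids"),
    }
    return tf.estimator.export.ServingInputReceiver(features, features)

estimator.export_savedmodel(FLAGS.export_dir, serving_input_receiver_fn,
                            strip_default_attrs=True)

The exported signature should then expose start_logits and end_logits as outputs, which matches the SignatureDef shown in the gRPC client issue above.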
