Comments (4)
Hi @dopc,
Could you share the error you are getting?
Some parts of the transformer architecture might be missing (e.g. softmax); they will be supported soon.
For now, you can check the LLM use case example, which uses GPT2: https://github.com/zama-ai/concrete-ml/tree/main/use_case_examples/llm.
from concrete-ml.
Hey @jfrery, thanks for your quick reply.
Good to see the GPT2 model works! I will take a look at it.
Here is my error trace:
File "/opt/conda/lib/python3.10/site-packages/concrete/ml/torch/compile.py", line 303, in compile_torch_model
return _compile_torch_or_onnx_model(
File "/opt/conda/lib/python3.10/site-packages/concrete/ml/torch/compile.py", line 195, in _compile_torch_or_onnx_model
quantized_module = build_quantized_module(
File "/opt/conda/lib/python3.10/site-packages/concrete/ml/torch/compile.py", line 117, in build_quantized_module
numpy_model = NumpyModule(model, dummy_input_for_tracing)
File "/opt/conda/lib/python3.10/site-packages/concrete/ml/torch/numpy_module.py", line 45, in __init__
self.numpy_forward, self._onnx_model = get_equivalent_numpy_forward_from_torch(
File "/opt/conda/lib/python3.10/site-packages/concrete/ml/onnx/convert.py", line 145, in get_equivalent_numpy_forward_from_torch
torch.onnx.export(
File "/opt/conda/lib/python3.10/site-packages/torch/onnx/utils.py", line 504, in export
_export(
File "/opt/conda/lib/python3.10/site-packages/torch/onnx/utils.py", line 1529, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/opt/conda/lib/python3.10/site-packages/torch/onnx/utils.py", line 1111, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "/opt/conda/lib/python3.10/site-packages/torch/onnx/utils.py", line 987, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/opt/conda/lib/python3.10/site-packages/torch/onnx/utils.py", line 891, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "/opt/conda/lib/python3.10/site-packages/torch/jit/_trace.py", line 1184, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/jit/_trace.py", line 127, in forward
graph, out = torch._C._create_graph_by_tracing(
File "/opt/conda/lib/python3.10/site-packages/torch/jit/_trace.py", line 118, in wrapper
outs.append(self.inner(*trace_inputs))
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1225, in forward
outputs = self.distilbert(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 806, in forward
embeddings = self.embeddings(input_ids, inputs_embeds) # (bs, seq_length, dim)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 129, in forward
input_embeds = self.word_embeddings(input_ids) # (bs, max_seq_length, dim)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1182, in _slow_forward
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 160, in forward
return F.embedding(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/functional.py", line 2210, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.FloatTensor instead (while checking arguments for embedding)
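The root cause of this trace can be reproduced in plain PyTorch, independently of Concrete ML: `torch.nn.Embedding` is a table lookup, so it requires integer (`Long` or `Int`) indices, while the tracing path here feeds it a float dummy tensor. A minimal sketch:

```python
import torch

# torch.nn.Embedding maps integer token ids to vectors, so its input
# must be an integer tensor (Long or Int), never floats.
emb = torch.nn.Embedding(10, 4)

# A float dummy input (like a float tracing input) triggers the same error:
try:
    emb(torch.rand(2, 3))
except RuntimeError as exc:
    print(exc)  # Expected tensor for argument #1 'indices' ... Long, Int ...

# Integer token ids work as expected:
ids = torch.randint(0, 10, (2, 3), dtype=torch.long)
out = emb(ids)
print(out.shape)  # torch.Size([2, 3, 4])
```

This is why the export fails inside DistilBERT's `word_embeddings` call: the dummy input used for ONNX tracing is a `FloatTensor` rather than token ids.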
Hey, my bad, I missed your answer. Concrete ML doesn't support the embedding layer yet; we will support it very soon.
If you don't mind, we can convert your issue to a feature request to make sure embedding support is added as soon as possible.
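Until embedding support lands, one possible workaround (sketched here in plain PyTorch with a hypothetical `Head` module; this is not the official Concrete ML recipe) is to run the integer embedding lookup in the clear on the client side and hand only the float embeddings to the part of the model you would compile:

```python
import torch

# Hypothetical split: the unsupported integer embedding lookup stays in the
# clear, and only the float-valued remainder of the model is a candidate
# for FHE compilation.
class Head(torch.nn.Module):
    """Stand-in for the transformer body operating on inputs_embeds."""
    def __init__(self, dim: int = 8, n_classes: int = 2):
        super().__init__()
        self.linear = torch.nn.Linear(dim, n_classes)

    def forward(self, inputs_embeds: torch.Tensor) -> torch.Tensor:
        # Mean-pool over the sequence, then classify.
        return self.linear(inputs_embeds.mean(dim=1))

emb = torch.nn.Embedding(100, 8)   # runs client-side, in the clear
head = Head()                      # float-only part, traceable with floats

input_ids = torch.randint(0, 100, (1, 5))
inputs_embeds = emb(input_ids)     # float tensor: a valid tracing input
logits = head(inputs_embeds)
print(logits.shape)                # torch.Size([1, 2])
```

Since `Head` only ever sees float tensors, a float dummy input is valid for tracing it, sidestepping the `indices` dtype error above.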
Thanks for the answer and for converting the issue to a feature request.
Looking forward to the support.