
he-transformer's People

Contributors

fboemer avatar jopasserat avatar lnguyen-nvn avatar lorepieri8 avatar mlayou avatar r-kellerm avatar rsandler00 avatar sfblackl-intel avatar yxlao avatar



he-transformer's Issues

Improvements

  • Square op
  • Don't relinearize after plain op
  • Throw error if multiplicative depth is too large
  • Use in-place ops where possible
  • Delay encoding as long as possible
  • Client/server model
  • Use static_cast instead of dynamic_cast for runtime improvement?
  • NGRAPH_CHECK, see NervanaSystems/ngraph#2727

Error occurs with tensorflow vgg-16

I cloned the repository from https://github.com/machrisaa/tensorflow-vgg.git, imported ngraph_bridge, and executed the Python test script with NGRAPH_TF_BACKEND=HE_SEAL_CKKS.
The following error occurs.

2019-06-08 00:26:18.242372: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_conversions.h:34] reshaping 2, 512, 7, 7 to 2, 7, 7, 512
 2019-06-08 00:26:18.242380: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:2249] maxpool outshape: {2, 7, 7, 512}
 2019-06-08 00:26:18.242385: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:4593] Constructing op content_vgg/fc6/Reshape which is Reshape
 2019-06-08 00:26:18.242391: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:3162] Input shape: 2, 7, 7, 512
 2019-06-08 00:26:18.242402: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:3167] Requested result shape: -1, 25088
 2019-06-08 00:26:18.242410: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:4593] Constructing op content_vgg/fc6/MatMul which is MatMul
 2019-06-08 00:26:18.242421: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:4593] Constructing op content_vgg/fc6/BiasAdd which is BiasAdd
 2019-06-08 00:26:18.242435: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:4593] Constructing op content_vgg/Relu which is Relu
 2019-06-08 00:26:18.242443: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:4593] Constructing op content_vgg/fc7/MatMul which is MatMul
 2019-06-08 00:26:18.242453: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:4593] Constructing op content_vgg/fc7/BiasAdd which is BiasAdd
 2019-06-08 00:26:18.242465: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:4593] Constructing op content_vgg/Relu_1 which is Relu
 2019-06-08 00:26:18.242473: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:4593] Constructing op content_vgg/fc8/MatMul which is MatMul
 2019-06-08 00:26:18.242483: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:4593] Constructing op content_vgg/fc8/BiasAdd which is BiasAdd
 2019-06-08 00:26:18.242495: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_builder.cc:4593] Constructing op content_vgg/prob which is Softmax
 2019-06-08 00:26:18.243018: I /home/kt.hur/work/he-transformer_clean/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_utils.cc:240] Serializing graph to: tf_function_ngraph_cluster_0.json

 Killed                               

HEBackend Vision

The HEBackend should be a tool for data scientists and researchers to run models homomorphically without worrying about parameter selection, noise budget, overflow, or security.
For now, its use is restricted to running computation graphs. The vision: a scientist trains a model on an unencrypted backend, then tests it on an encrypted backend.

Why not commercial?

  • Depends heavily on SEAL, which has only a

How to train and export the model?

  • Ngraph-neon is okay, but currently lacks support and is also a little high-level; e.g., I can't override the gradient op.
  • TensorFlow is okay, with more lower-level support (i.e., I can override the gradient op), but some higher-level functionality isn't supported.

How to deal with parameter selection?

  • SEAL's parameter selection tool only works with integer-encoded values.
  • We can estimate parameters by using the (additive, multiplicative) depth of the computation graph, as well as the largest inputs.
    • Improve predictions over time?
  • Worst-case, we can run the desired computation and just double the poly-mod and plaintext-mod until it succeeds. We'd need to store the model across runs somehow.
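
Both estimation strategies above can be sketched in a few lines; the graph representation and function names here are illustrative, not part of the codebase:

```python
# Sketch of the two estimation strategies (illustrative only):
# 1) bound the multiplicative depth of a computation DAG,
# 2) worst case, double the plaintext modulus until the result is correct.

def mult_depth(node, graph):
    """graph: {node: (op, [input nodes])}; counts multiplies on the deepest path."""
    op, inputs = graph[node]
    d = max((mult_depth(i, graph) for i in inputs), default=0)
    return d + (1 if op == "mul" else 0)

graph = {
    "x":  ("input", []),
    "x2": ("mul", ["x", "x"]),
    "y":  ("add", ["x2", "x"]),
    "z":  ("mul", ["y", "x2"]),
}
assert mult_depth("z", graph) == 2    # two multiplies on the deepest path


def smallest_plain_mod(is_correct, start=512):
    """Double plain_mod until the (simulated) computation comes out correct."""
    m = start
    while not is_correct(m):
        m *= 2
    return m

# toy "computation": correct once plain_mod exceeds the largest value, 9776
assert smallest_plain_mod(lambda m: m > 9776) == 16384
```

The depth bound plus the largest-input magnitude would feed the initial parameter guess; the doubling loop is the fallback when that guess fails.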

Items to do:

  • Efficiency
    • Use in-place ops (add, multiply) if possible / helpful.
    • Investigate square op (instead of multiply(x,x))
  • Make sure all unit tests pass (int, float, plain, etc.)
    • Completely remove or completely support uint64_t.
  • Enable CPU checking by creating modified IntCallFrame generate_calls function
  • More ops? Convolution. Negate. Power(int), Subtract. Scaled mean-pool?
  • Clean code TODOs.
  • Client-server model to communicate?
  • Update to latest ngraph api.
  • Debug (CPU check, overflow/noise notification, private key) vs release version?
    • One-Hot not available in release version
  • Batching?!

Don't relinearize after multiply cipher-plain

This doesn't increase the size of the ciphertext.
From the SEAL example on weighted averages:

In this example we demonstrate the FractionalEncoder, and use it to compute
a weighted average of 10 encrypted rational numbers. In this computation we
perform homomorphic multiplications of ciphertexts by plaintexts, which is
much faster than regular multiplications of ciphertexts by ciphertexts.
Moreover, such `plain multiplications' never increase the ciphertext size,
which is why we have no need for evaluation keys in this example.
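
The size argument can be modeled in a toy way: multiplying ciphertexts with k and m polynomial components yields k+m-1 components, and a plaintext behaves like a single component, so cipher-plain products keep size 2 and never need relinearization (a sketch, not SEAL's actual data structures):

```python
# Toy model of BFV/CKKS ciphertext size under multiplication (not SEAL's API).
# A fresh ciphertext has 2 polynomial components; a plaintext counts as 1.

def mul_size(a_components, b_components):
    # product of size-k and size-m ciphertexts has k + m - 1 components
    return a_components + b_components - 1

CIPHER, PLAIN = 2, 1

assert mul_size(CIPHER, CIPHER) == 3  # cipher*cipher grows -> relinearize
assert mul_size(CIPHER, PLAIN) == 2   # cipher*plain stays size 2 -> no relin needed
```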

Mnist runtime results - work in progress

We are testing inference on a heavily-quantized MNIST 2-hidden-layer fully-connected network and comparing against the same computation on the CPU backend. The weights are very small, nearly ternary, taking values in {-2, -1, 0, 1, 2}. We export the model from TensorFlow to .json format. Even though the weights are integers, TensorFlow exports them as floats, so we treat them as floats (FractionalEncoder instead of IntegerEncoder).

Bad answer means the results are reasonable-looking but incorrect integers. This is likely due to the plain_mod being too small, with some values wrapping around the modulus during computation. We saw similar behavior in the dot-product unit tests when we decreased the plain modulus to 4.

Desired answer is (2173, 944, 1151, 1723, -1674, 569, -1985, 9776, -4997, -1903)

The general approach: find the smallest poly_mod for which there exists a plain_mod large enough that the answer is correct, but small enough that the noise budget isn't depleted.

Larger poly_mod and larger plain_mod each make computations slower.

| machine | batch size | poly mod | coeff mod | plain mod | threads | re-linearize | time | result |
|---------|------------|----------|-----------|-----------|---------|--------------|--------|--------|
| bdw13 | 1 | 4096 | 128-bit security | 512 | 20 | N | 4'42" | Bad (mod correct?) answer |
| bdw13 | 1 | 4096 | 128-bit security | 768 | 20 | N | 4'44" | Noise budget depleted |
| bdw13 | 1 | 4096 | 128-bit security | 1024 | 20 | N | 4'43" | Noise budget depleted |
| bdw13 | 1 | 8192 | 128-bit security | 512 | 20 | N | 16'38" | Bad (mod correct?) answer |
| bdw13 | 1 | 8192 | 128-bit security | 1024 | 20 | N | 16'32" | Bad (mod correct?) answer |
| bdw13 | 1 | 8192 | 128-bit security | 1024 | 20 | Y-16 | 20'43" | Correct up to mod 1024: (1149, 944, -2945, -325, 374, 569, 1087, 560, 123, 1169) |
| bdw13 | 1 | 8192 | 128-bit security | 2048 | 20 | Y-16 | 20'38" | Correct up to mod: (125, 2992, -2945, -2373, 374, 569, -1985, 1584, 1147, 2193) |
| bdw17 | 1 | 8192 | 128-bit security | 2048 | 20 | N | 16'22" | Correct up to mod (same as above) |
| bdw17 | 1 | 8192 | 128-bit security | 4096 | 20 | N | 16'29" | Correct up to mod: (2173, 5040, -2945, 1723, -1674, 569, -1985, 5680, -4997, -5999) |
| bdw17 | 1 | 8192 | 128-bit security | 4096 | 20 | Y-16 | 20'35" | Correct up to mod (same as above) |
| bdw13 | 1 | 8192 | 128-bit security | 10000 | 20 | Y-16 | 20'39" | Correct answer |
| bdw13 | 1 | 8192 | 128-bit security | 10000 | 20 | N | 17'53" | Correct answer |
  • Conclusions:
    • As far as I can tell, exactly one of two things happens:
        1. a noise-budget-depleted error is thrown, or
        2. the answer is correct up to the plain-mod.
    • Relinearizing with a 16-bit key takes about 10-25% longer, though with a larger plain_mod this overhead shrinks. Presumably at some large enough plain-mod, relinearization would become faster.
      • Need to check with smaller keys (faster, but consumes more noise)
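
The "correct up to mod" rows can be checked directly: every observed slot is congruent to the desired answer modulo plain_mod. Using the plain_mod = 2048 results from the table:

```python
# Verify the "correct up to mod" claim for the plain_mod = 2048 rows above:
# each observed slot differs from the desired answer by a multiple of 2048.

desired  = [2173,  944,  1151,  1723, -1674, 569, -1985, 9776, -4997, -1903]
observed = [ 125, 2992, -2945, -2373,   374, 569, -1985, 1584,  1147,  2193]
plain_mod = 2048

assert all((d - o) % plain_mod == 0 for d, o in zip(desired, observed))
```

So the arithmetic is right; only values larger than the plaintext modulus wrap around, which is consistent with the plain_mod = 10000 rows being fully correct.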

TODOs

Infrastructure

  • Check precision of ops across the range of inputs
  • Check encryption budget is satisfied when performing ops
  • (Yixing) Update to latest ngraph device registration API
  • (Fabian) Base code on ie backend (if it's stable)
  • (Fabian) Add plaintext support for ops
    • Dot, Multiply, Add, Subtract (one way)
    • Constant

Ops for Mnist

  • Get model weights + structure from Xin
    • (Xin) Make python script to create tf model
    • Export to ngraph serialized format
    • Import to ngraph
  • Add Dot op
  • Add Constant op

Error converting preexisting TF graph

Hi,

I am trying to convert the TF graph in my existing codebase to be HE compatible with nGRAPH-HE.

To convert it I replaced all non-HE compatible layers w/ HE compatible versions, and added import ngraph_bridge to the top of the file.

To run the program I use: NGRAPH_TF_BACKEND=HE_SEAL python make_basic_net.py

However, I get an error:

<personal program printout>
/
| Encryption parameters :
|   scheme: CKKS
|   poly_modulus_degree: 8192
|   coeff_modulus size: 180 (30 + 24 + 24 + 24 + 24 + 24 + 30) bits
\
[INFO] 2019-07-01T21:58:47z src/seal/he_seal_backend.cpp 84     Scale 1.67608e+07
[INFO] 2019-07-01T21:58:48z src/seal/he_seal_executable.cpp 462 Batching data with batch size 3
[INFO] 2019-07-01T21:58:48z src/seal/he_seal_executable.cpp 479 Processing server inputs
2019-07-01 14:58:48.016521: I /home/isi/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_utils.cc:252] Serializing graph to: tf_function_error_ngraph_cluster_45.json

Traceback (most recent call last):
  File "/home/isi/projects/VORGNet/.venv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/home/isi/projects/VORGNet/.venv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/isi/projects/VORGNet/.venv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InternalError: Caught exception while executing nGraph computation: Check '(element_type == element::f32)' failed at /home/isi/he-transformer/src/seal/kernel/constant_seal.cpp:27:
Constant supports only f32 type


         [[{{node ngraph_cluster_45}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "make_basic_net.py", line 80, in <module>
    out = tfnet.sess.run(tfnet.out, {tfnet.inp: input})
  File "/home/isi/projects/VORGNet/.venv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/home/isi/projects/VORGNet/.venv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/isi/projects/VORGNet/.venv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/home/isi/projects/VORGNet/.venv/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Caught exception while executing nGraph computation: Check '(element_type == element::f32)' failed at /home/isi/he-transformer/src/seal/kernel/constant_seal.cpp:27:
Constant supports only f32 type

At first I thought there was something wrong with my graph, but then I tried debugging by feeding data to the 1st layer, taking its output, feeding that to the next layer, and repeating until I had gone through the whole network. Lo and behold, every layer worked successfully in isolation. It is only when they are combined that this error appears.
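
The layer-isolation debugging described above follows a simple pattern: run each layer as its own computation, feeding the previous output forward, then compare against the end-to-end run. A plain-NumPy stand-in for that pattern (no TF sessions; the layers here are illustrative):

```python
# Sketch of the layer-isolation debugging described above: run each layer in its
# own isolated step, feeding the previous layer's output forward, and compare
# against the whole network run at once. Plain-Python stand-ins, not TF sessions.

import numpy as np

def layer1(x): return x @ np.eye(3) + 1.0   # toy affine layer
def layer2(x): return np.maximum(x, 0.0)    # relu
layers = [layer1, layer2]

def run_isolated(layers, x):
    """One isolated run per layer, as in the debugging experiment."""
    for layer in layers:
        x = layer(x)                        # each call is a fresh, isolated step
    return x

x = np.array([[1.0, -2.0, 3.0]])
full = layer2(layer1(x))                    # whole network in one go
isolated = run_isolated(layers, x)
assert np.allclose(full, isolated)          # identical results layer-by-layer
```

If the isolated runs succeed but the combined run fails, the problem is in how the graph is clustered or how intermediate tensors are typed, not in any single layer.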


Other debugging attempts:

  1. I tried running with NGRAPH_ENCRYPT_DATA=1 (e.g. NGRAPH_ENCRYPT_DATA=1 NGRAPH_TF_BACKEND=HE_SEAL python make_basic_net.py). I have no idea what this flag does, as I thought encrypting data was the default behavior. In any case, this time I got a different error (see below), and I got the same error when running the layers in isolation.
/
| Encryption parameters :
|   scheme: CKKS
|   poly_modulus_degree: 8192
|   coeff_modulus size: 180 (30 + 24 + 24 + 24 + 24 + 24 + 30) bits
\
[INFO] 2019-07-01T22:00:39z src/seal/he_seal_backend.cpp 84     Scale 1.67608e+07
[INFO] 2019-07-01T22:00:39z src/seal/he_seal_executable.cpp 459 Encrypting data
[INFO] 2019-07-01T22:00:39z src/seal/he_seal_executable.cpp 462 Batching data with batch size 3
[INFO] 2019-07-01T22:00:39z src/seal/he_seal_executable.cpp 479 Processing server inputs
[INFO] 2019-07-01T22:00:39z src/seal/he_seal_executable.cpp 504 Encrypting parameter 0
[INFO] 2019-07-01T22:00:39z src/seal/he_seal_executable.cpp 519 Done encrypting parameter
[INFO] 2019-07-01T22:00:39z src/seal/he_seal_executable.cpp 504 Encrypting parameter 0
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)
  2. I also tried building from a previous commit (6ae29aa - see my other issue), and got the exact same behaviour.

NGRAPH_TF_BACKEND: HE_SEAL is not supported

I ran ax.py using NGRAPH_TF_BACKEND=HE_SEAL python ax.py. However, I got:

Traceback (most recent call last):
  File "ax.py", line 26, in <module>
    f_val = sess.run(f, feed_dict={b: np.ones((1, 2))})
  File "/home/xiaoxiaowan/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/home/xiaoxiaowan/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/xiaoxiaowan/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/home/xiaoxiaowan/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: NGRAPH_TF_BACKEND: HE_SEAL is not supported

Thanks

Add op performance

Since adding CPU-op checking, the Add op has become much slower:

[INFO] 2018-05-07T23:24:01z he_call_frame.cpp 159	Op Add_31
[INFO] 2018-05-07T23:26:14z he_backend.cpp 324	Checking noise budget
[INFO] 2018-05-07T23:26:14z he_backend.cpp 334	Noise budget 412
[INFO] 2018-05-07T23:26:14z he_backend.cpp 343	Done checking noise budget

MNIST-Cryptonets example does not work w/ encrypted weights

I tried running the MNIST-Cryptonets example with encrypted weights and data and got an error. (Using unencrypted data as in the published example worked just fine).

NGRAPH_ENCRYPT_DATA=1 \
NGRAPH_HE_SEAL_CONFIG=../../test/model/he_seal_ckks_config_N13_L7.json \
NGRAPH_ENCRYPT_MODEL=1 \
NGRAPH_TF_BACKEND=HE_SEAL \
python test.py --batch_size=4096

And got the error readout:

(venv-tf-py3) rsandler@fennti-server:/home/isi/he-transformer/examples/MNIST-Cryptonets$ NGRAPH_ENCRYPT_DATA=1 NGRAPH_HE_SEAL_CONFIG=../../test/model/he_seal_ckks_config_N13_L7.json NGRAPH_ENCRYPT_MODEL=1 NGRAPH_TF_BACKEND=HE_SEAL python test.py --batch_size=4096
Extracting /tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data/t10k-images-idx3-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data/t10k-labels-idx1-ubyte.gz
2019-07-22 13:57:40.815323: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2100000000 Hz
2019-07-22 13:57:40.817809: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x4a34390 executing computations on platform Host. Devices:
2019-07-22 13:57:40.817865: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
/
| Encryption parameters :
|   scheme: CKKS
|   poly_modulus_degree: 8192
|   coeff_modulus size: 180 (30 + 24 + 24 + 24 + 24 + 24 + 30) bits
\
[INFO] 2019-07-22T20:57:40z src/seal/he_seal_backend.cpp 86     Scale 1.67608e+07
[INFO] 2019-07-22T20:57:41z src/seal/he_seal_executable.cpp 474 Encrypting data
[INFO] 2019-07-22T20:57:41z src/seal/he_seal_executable.cpp 477 Batching data with batch size 4096
[INFO] 2019-07-22T20:57:41z src/seal/he_seal_executable.cpp 480 Encrypting model
[INFO] 2019-07-22T20:57:41z src/seal/he_seal_executable.cpp 494 Processing server inputs
[INFO] 2019-07-22T20:57:41z src/seal/he_seal_executable.cpp 519 Encrypting parameter 0
[INFO] 2019-07-22T20:57:41z src/seal/he_seal_executable.cpp 536 Done encrypting parameter
terminate called after throwing an instance of 'terminate called recursively
ngraph::CheckFailure'
terminate called recursively
terminate called recursively
Aborted (core dumped)

In fact, even calling test.py with NGRAPH_ENCRYPT_MODEL=0 gives the same error, as in:

NGRAPH_ENCRYPT_DATA=1 NGRAPH_HE_SEAL_CONFIG=../../test/model/he_seal_ckks_config_N13_L7.json NGRAPH_TF_BACKEND=HE_SEAL NGRAPH_ENCRYPT_MODEL=0 python test.py --batch_size=1

error while using client mode

I used
export LD_LIBRARY_PATH=/home/liao/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/python/ngraph_bridge:$LD_LIBRARY_PATH
while the virtualenv was active, and it finally found libngraph.so.
After that I tried to use client mode.
The client shows:
[screenshots not included]

Another terminal shows:
[screenshot not included]

Getting a crash when running CryptoNets' test.py with NGRAPH_ENCRYPT_MODEL=1

Hi,

Trying to run the following command causes a crash -

NGRAPH_ENCRYPT_MODEL=1 [NGRAPH_TF_BACKEND=HE_SEAL_CKKS | NGRAPH_TF_BACKEND=HE_SEAL_BFV]  python test.py

(with or without NGRAPH_HE_SEAL_CONFIG)
However, the crash does not happen when both NGRAPH_ENCRYPT_MODEL and NGRAPH_ENCRYPT_DATA are specified.
The crash log follows:

(venv-tf-py3) he-transformer/examples/cryptonets$ NGRAPH_ENCRYPT_MODEL=1 NGRAPH_TF_BACKEND=HE_SEAL_BFV python test.py
TensorFlow version installed: 1.12.0 (v1.12.0-0-ga6d8ffa)
nGraph bridge built with: 1.12.0 (v1.12.0-0-ga6d8ffa)
Extracting /tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data/t10k-images-idx3-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data/t10k-labels-idx1-ubyte.gz
[INFO] 2019-01-20T13:36:01z src/seal/bfv/he_seal_bfv_backend.cpp 65 Using SEAL BFV default parameters
[INFO] 2019-01-20T13:36:01z src/seal/ckks/he_seal_ckks_backend.cpp 83 Using SEAL CKKS default parameters
[INFO] 2019-01-20T13:36:01z src/seal/ckks/he_seal_ckks_backend.cpp 87 Error config_path is NULL
[INFO] 2019-01-20T13:36:01z src/seal/ckks/he_seal_ckks_backend.cpp 88 Error using NGRAPH_HE_SEAL_CONFIG. Using default
[INFO] 2019-01-20T13:36:01z src/seal/he_seal_util.hpp 33
/ Encryption parameters:
| scheme: HE:SEAL:BFV
| poly_modulus: 4096
| coeff_modulus size: 109 bits
| plain_modulus: 1024
\ noise_standard_deviation: 3.2
[INFO] 2019-01-20T13:36:01z src/he_backend.cpp 258 Encrypting model
[INFO] 2019-01-20T13:36:01z src/he_backend.cpp 324 [ Parameter_0 ]
[INFO] 2019-01-20T13:36:01z src/he_backend.cpp 329 Parameter shape {1, 784}
[INFO] 2019-01-20T13:36:01z src/he_backend.cpp 324 [ Constant_17 ]
[INFO] 2019-01-20T13:36:01z src/he_backend.cpp 450 Inputs:
[INFO] 2019-01-20T13:36:01z src/he_backend.cpp 458 Outputs: Cipher
[INFO] 2019-01-20T13:36:01z src/he_backend.cpp 402 Constant_17 took 110ms
[INFO] 2019-01-20T13:36:01z src/he_backend.cpp 324 [ Constant_14 ]
[INFO] 2019-01-20T13:36:02z src/he_backend.cpp 450 Inputs:
[INFO] 2019-01-20T13:36:02z src/he_backend.cpp 458 Outputs: Cipher
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 402 Constant_14 took 9896ms
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 324 [ Constant_10 ]
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 450 Inputs:
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 458 Outputs: Cipher
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 402 Constant_10 took 0ms
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 324 [ Constant_3 ]
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 450 Inputs:
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 458 Outputs: Cipher
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 402 Constant_3 took 14ms
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 324 [ Reshape_2 ]
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 450 Inputs: Plain
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 458 Outputs: Plain
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 786 Reshape op
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 801 Done with reshape op
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 402 Reshape_2 took 0ms
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 324 [ Reshape_5 ]
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 450 Inputs: Cipher
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 458 Outputs: Cipher
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 786 Reshape op
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 801 Done with reshape op
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 402 Reshape_5 took 0ms
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 324 [ Reshape_4 ]
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 450 Inputs: Plain
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 458 Outputs: Plain
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 786 Reshape op
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 801 Done with reshape op
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 402 Reshape_4 took 0ms
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 324 [ Convolution_6 ]
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 450 Inputs: Plain, Cipher
[INFO] 2019-01-20T13:36:11z src/he_backend.cpp 458 Outputs: Cipher
terminate called recursively
terminate called recursively
terminate called recursively
terminate called recursively
Aborted (core dumped)

Fix relinearize

  • Unit test abc_plain_plain fails when relinearizing, because the relinearize op is always inserted, rather than only when the output is a ciphertext.
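
A sketch of the intended fix, inserting the relinearize step only when a multiply's output is actually a ciphertext (an illustrative pass over a flat op list, not he-transformer's real graph pass):

```python
# Sketch of the fix: insert a relinearize step only after multiplies whose
# output is actually a ciphertext, so plain*plain paths are left untouched.
# (Illustrative pass over a flat op list, not he-transformer's real IR.)

def insert_relinearize(ops):
    """ops: list of (op_name, output_is_cipher) pairs."""
    out = []
    for op, out_is_cipher in ops:
        out.append(op)
        if op == "mul" and out_is_cipher:
            out.append("relinearize")
    return out

# a plain*plain multiply must NOT get a relinearize (the failing abc_plain_plain case)
assert insert_relinearize([("mul", False)]) == ["mul"]
# a cipher multiply does
assert insert_relinearize([("mul", True)]) == ["mul", "relinearize"]
```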

Cryptonets Timings

Mnist-Cryptonets

|                | TensorFlow-nGraph++ | nGraph++ |
|----------------|---------------------|----------|
| 80-bit (N=13)  | 99.99s              | 99.99s   |
| 256-bit (N=14) | 99.99s              | 99.99s   |

HE_SEAL should be HE_SEAL_CKKS

Hi,

I tried running the basic example in examples/axpy.py using the line NGRAPH_TF_BACKEND=HE_SEAL python axpy.py as indicated in examples/README.md. However I got the error:

tensorflow.python.framework.errors_impl.InternalError: NGRAPH_TF_BACKEND: HE_SEAL is not supported

I could only get it to work by using NGRAPH_TF_BACKEND=HE_SEAL_CKKS python axpy.py. (Note HE_SEAL_CKKS)

Is there a typo here or am I doing something wrong?

Merge changes to nGraph.

Currently we're using special branches of ngraph and ngraph-tf for some quick fixes. We should merge them in, or refactor the code so that we can use the standard versions of the two repositories.

Don't pass type to ops

From SEAL example 1, it seems that ops don't depend on the encoder type, so a lot of our ops can be simplified by not caring about the type. Verify this is true, then simplify those ops.

Demoing Encryption Capabilities

Hello,

I have successfully installed the library, including the python bindings. I've tested the MNIST example, and the ax.py example and I think I have a pretty fair understanding of how it all works (having read the nGraph-HE paper, the CKKS algorithm paper, and taken a look at the Microsoft SEAL Github account).

However, I was wondering if there were some way to view the encrypted data/models? I recognize that the data is being encrypted and then decrypted, but I can't seem to figure out how to view the encrypted data (or the keys, encryptors, and decryptors) without editing the c++ libraries on the backend in the src file area.

Is there a way to do this for demo purposes using python?

Thank you!

Ubuntu 18 gets stuck during TensorFlow compilation after a long build

When I run
cmake .. [-DCMAKE_CXX_COMPILER=g++-7 -DCMAKE_C_COMPILER=gcc-7]
and then make install, the build gets stuck while compiling TensorFlow.
[screenshot not included]

And when I skip compiling TensorFlow with
cmake .. -DUSE_PREBUILT_BINARIES=ON [-DCMAKE_CXX_COMPILER=g++-7 -DCMAKE_C_COMPILER=gcc-7]
it reports an error that TensorFlow is not there.

Kindly help me out here. Thank you!

Building on Ubuntu 18.04 using Python 3.6

I'm using Ubuntu 18.04 with gcc and g++ version 7.4.0. I see that in the Building HE Transformer section you wrote that you only support Ubuntu 16.04; what's the reason for that? When I tried to build on Ubuntu 18.04, it failed at this point.

[ 50%] Building CXX object src/CMakeFiles/he_seal_backend.dir/seal/he_seal_backend.cpp.o
[ 51%] Building CXX object src/CMakeFiles/he_seal_backend.dir/seal/he_seal_client.cpp.o
[ 52%] Linking CXX shared library libhe_seal_backend.so
[ 52%] Built target he_seal_backend
Scanning dependencies of target he_seal_backend_soft_link
CMake Error: failed to create symbolic link '/home/et/tools/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages/ngraph_bridge/libhe_seal_backend.so': no such file or directory

I believe the reason is that I have Python 3.6 on my system. Is it possible to configure the build to use Python 3.6? Currently it looks for a python3.5 directory, which doesn't exist; in my case there is python3.6 instead.
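
One possible workaround, under the untested assumption that the build only needs the hard-coded directory name to resolve, is to symlink the expected python3.5 path to the real python3.6 one. Demonstrated here on a scratch directory rather than the real venv path:

```python
# Hypothetical workaround, demonstrated on a scratch directory: create a
# "python3.5" symlink pointing at the real python3.6 site-packages tree,
# so the hard-coded path the build expects resolves.

import os
import tempfile

venv_lib = tempfile.mkdtemp()               # stands in for .../venv-tf-py3/lib
real = os.path.join(venv_lib, "python3.6", "site-packages", "ngraph_bridge")
os.makedirs(real)

# the symlink that makes the expected python3.5 path resolve
os.symlink(os.path.join(venv_lib, "python3.6"),
           os.path.join(venv_lib, "python3.5"))

expected = os.path.join(venv_lib, "python3.5", "site-packages", "ngraph_bridge")
assert os.path.isdir(expected)              # the path CMake wanted now exists
```

The real fix would be teaching the CMake scripts to detect the Python minor version instead of hard-coding 3.5.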

error while using sudo make install python_wheel

Hi, I successfully completed the make install step, and I was trying the client mode using seal_client.
When I try the command sudo make install python_wheel in the "build" folder, this error occurs.
[screenshot not included]

Any help? Thanks so much.

Issue with 'make build_and_test_he_transformer' using Docker image

Hello,
I'm using the Docker method to build and test he-transformer.
I have downloaded the content of the he-transformer/docker/ directory and run the following two commands:
make build_docker_image
make build_and_test_he_transformer

The first one make build_docker_image completed successfully without any issues.

The second, make build_and_test_he_transformer, on the other hand, completed the first two targets, build and unit_tests, successfully, and then failed on the third target, python_integration. Below is the error trace I got.

[----------] Global test environment tear-down
[==========] 186 tests from 3 test cases ran. (45569 ms total)
[  PASSED  ] 186 tests.

  YOU HAVE 2 DISABLED TESTS

/home/he-transformer
Running python examples
python: can't open file 'axpy.py': [Errno 2] No such file or directory
Makefile:19: recipe for target 'build_and_test_he_transformer' failed
make: *** [build_and_test_he_transformer] Error 2

Here's also the code of the python_integration() function in build_and_test_he_transformer.sh

python_integration()
{
    echo 'Running python examples'
    cd examples
    python axpy.py
    NGRAPH_TF_BACKEND=HE_SEAL python axpy.py
    cd MNIST-Cryptonets
    NGRAPH_HE_SEAL_CONFIG=../../test/model/he_seal_ckks_config_N13_L7.json \
        NGRAPH_TF_BACKEND=HE_SEAL \
        NGRAPH_ENCRYPT_DATA=1 \
        python test.py --batch_size=128
}

The runtime trace above does not complain about cd examples, but then it cannot find axpy.py in that directory or anywhere on its path. Any idea how to fix this?

By the way, I ran a shell in the he-transformer container (docker run -it he_transformer /bin/bash) to look for the axpy.py file inside the image, but I could not find any he-transformer code on the image (see below).
Isn't the make build_docker_image command supposed to persist the he-transformer build (i.e., the binaries and Python files) on the Docker image? In other words, am I not supposed to find those files when I log into the image?
Thank you for your help.

root@e59568a260f6:/home# ls -last
total 8
4 drwxr-xr-x 1 root root 4096 Jul 19 14:59 ..
4 drwxr-xr-x 2 root root 4096 Apr 12  2016 .
root@e59568a260f6:/home# ls -last /
total 164736
     0 drwxr-xr-x   5 root root       360 Jul 19 14:59 dev
     0 dr-xr-xr-x 351 root root         0 Jul 19 14:59 proc
     4 drwxr-xr-x   1 root root      4096 Jul 19 14:59 .
     4 drwxr-xr-x   1 root root      4096 Jul 19 14:59 ..
     0 -rwxr-xr-x   1 root root         0 Jul 19 14:59 .dockerenv
     4 drwxr-xr-x   1 root root      4096 Jul 19 14:59 etc
     0 dr-xr-xr-x  13 root root         0 Jul 18 18:02 sys
     4 drwx------   1 root root      4096 Jul 18 15:37 root
     4 drwxrwxrwt   1 root root      4096 Jul 18 15:36 tmp
     4 drwxr-xr-x   1 root root      4096 Jul 18 15:36 lib
     4 drwxr-xr-x   1 root root      4096 Jul 18 15:36 run
     4 drwxr-xr-x   1 root root      4096 Jul 18 15:35 sbin
     4 drwxr-xr-x   1 root root      4096 Jul 18 15:35 bin
     4 drwxr-xr-x   1 root root      4096 Jun 10 20:41 var
     4 drwxr-xr-x   2 root root      4096 Jun 10 20:41 lib64
     4 drwxr-xr-x   2 root root      4096 Jun 10 20:40 media
     4 drwxr-xr-x   2 root root      4096 Jun 10 20:40 mnt
     4 drwxr-xr-x   2 root root      4096 Jun 10 20:40 opt
     4 drwxr-xr-x   2 root root      4096 Jun 10 20:40 srv
     4 drwxr-xr-x   1 root root      4096 Jun 10 20:40 usr
164664 -rw-r--r--   1 root root 168610096 Dec 19  2018 bazel_0.21.0-linux-x86_64.deb
     4 drwxr-xr-x   2 root root      4096 Apr 12  2016 boot
     4 drwxr-xr-x   2 root root      4096 Apr 12  2016 home
root@e59568a260f6:/home# find / -name *he-transformer*
root@e59568a260f6:/home# 

Getting error for basic matrix multiplication example

I want to implement basic matrix multiplication in nGRAPH-HE.

Below is my code:

import ngraph_bridge
import numpy as np
import tensorflow as tf

tBf32 = tf.placeholder(np.float32)

def matmul(a, b, c=None):

    # Code will only run in 'encrypted mode' if a constant bias is used
    if c is None:
        z = tf.constant(np.zeros(shape=(a.shape[0], b.shape[1])), dtype=np.float32)
    else:
        z = tf.constant(c, dtype=np.float32)

    za = tf.constant(a, dtype=np.float32)
    with tf.Session() as sess:
        return sess.run(z + tf.matmul(za, tBf32), {tBf32: b})


if __name__ == '__main__':
    for _ in range(10):
        a = np.random.randint(low = -8, high = 8, size=(20,20))
        b = np.random.randint(low = -1, high = 1, size=(20,20))
        c = np.random.uniform(size=(20,20)) * 2
        d = np.abs(matmul(a, b, c) - (np.matmul(a, b) + c))
        print('Maximum error is {}'.format(np.max(d)))

The above code is run via: NGRAPH_ENCRYPT_DATA=1 NGRAPH_TF_BACKEND=HE_SEAL python tm.py

I am running into 2 issues:

  1. An offset needs to be provided for the multiplication to work in encrypted mode (e.g. return sess.run(tf.matmul(za, tBf32), {tBf32: b}) will not perform encryption). In fact, the bias specifically needs to come before the matmul, so this will not work either: return sess.run(tf.matmul(za, tBf32) + z, {tBf32: b})
  2. This program worked in a previous commit (6ae29aa); however, in the latest release I get a error when running it:
2019-07-01 12:20:06.131850: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2100000000 Hz
2019-07-01 12:20:06.134584: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x3936000 executing computations on platform Host. Devices:
2019-07-01 12:20:06.134640: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
[INFO] 2019-07-01T19:20:06z src/seal/he_seal_encryption_parameters.hpp 110      Using default SEAL CKKS parameters
[WARN] 2019-07-01T19:20:06z src/seal/he_seal_backend.cpp 57     Parameter selection does not enforce minimum security level
/
| Encryption parameters :
|   scheme: CKKS
|   poly_modulus_degree: 1024
|   coeff_modulus size: 150 (30 + 30 + 30 + 30 + 30) bits
\
[INFO] 2019-07-01T19:20:06z src/seal/he_seal_backend.cpp 84     Scale 1.0737e+09
[INFO] 2019-07-01T19:20:06z src/seal/he_seal_executable.cpp 462 Batching data with batch size 20
[INFO] 2019-07-01T19:20:06z src/seal/he_seal_executable.cpp 479 Processing server inputs
*** Error in `python3': free(): invalid next size (fast): 0x00007f486c000ba0 ***
*** Error in `python3': free(): invalid next size (fast): 0x00007f4854000ba0 ***
*** Error in `python3': free(): invalid next size (fast): 0x00007f4814000ba0 ***
*** Error in `python3': free(): invalid next size (fast): 0x00007f4894000ba0 ***
Aborted (core dumped)
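For context on the "Batching data with batch size 20" log line above: nGraph-HE packs the batch (first) dimension into the CKKS plaintext slots, so each ciphertext holds one feature value across all batch elements. A minimal sketch of that layout with plain Python lists (not the he-transformer API; pack_batch/unpack_batch are hypothetical helper names):

```python
# Hypothetical illustration of batch-axis plaintext packing: a (batch, features)
# matrix becomes one slot-vector per feature, each holding that feature for
# every batch element. This is the layout, not the real encryption code.

def pack_batch(matrix):
    """Pack a (batch, features) matrix feature-wise: one slot-vector per feature."""
    batch, features = len(matrix), len(matrix[0])
    return [[matrix[b][f] for b in range(batch)] for f in range(features)]

def unpack_batch(packed):
    """Invert pack_batch: recover the (batch, features) matrix."""
    features, batch = len(packed), len(packed[0])
    return [[packed[f][b] for f in range(features)] for b in range(batch)]
```

Under this layout, a single homomorphic op applied to one packed vector acts on the whole batch at once, which is why inference cost is nearly independent of the batch size up to the slot count.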

Heaan vs Seal

We test a heavily quantized (weights in {-2, -1, 0, 1, 2}) one-hidden-layer (100 hidden units) MNIST neural network.

Backend | log2(poly mod) | plain mod | Runtime (s) | Result
SEAL    | 11             | 100000    | n/a         | Out of noise
SEAL    | 12             | 100000    | 12.2        | Correct
HEAAN   | 8              | 2^200     | 4.1         | Correct
HEAAN   | 7              | 2^200     | 2.4         | Correct
HEAAN   | 7              | 2^300     | 2.5         | Correct
HEAAN   | 6              | 2^200     | 1.5         | Correct
HEAAN   | 5              | 2^200     | ?           | Nothing really happens; doesn't even initialize

It seems SEAL needs a larger poly mod, but a smaller plain mod, than HEAAN. The runtime grows slightly sublinearly in the poly mod and is almost constant in the plain mod, so HEAAN performs better thanks to the smaller poly mod it needs.

We need to investigate the security of the HEAAN scheme.

How do I test if the training is safe?

Hi, after successfully running the test code, how can I actually verify safety during training?
For example, if I use encrypted mode for both the model and the data, is there any way to show that the whole process is hard to attack or hack?
Can the NIST cybersecurity framework be used here?
Thanks

TODOs

  • Standardize type -> element_type
  • Re-enable SEAL pooled ops
  • Add more ops (Product, Reverse, ReverseSlice, etc.)
  • Check security of HEAAN
    • See page 17 of paper: parameters should satisfy N ≥ ((λ+110) / 7.2) * log(P * q_L)
    • Either increase poly-mod or decrease plain-mod; current security is too small
  • Add batching to plaintext ops?
    • Should be possible in theory; HEAAN makes it difficult; no real use case
  • Generalize unit-tests to test float/int/double?
  • Debug/Release version
    • Only Debug version should build SEAL with debug
    • Only Debug version should store Ciphertexts
  • Use reScaleByAndEqual after Secret Key and dot ops in HEAAN?
  • Update README, especially for example instructions
  • Optimize ops for HE-friendliness -- This should take place in kernel/op.cpp, not in kernel/{seal,heaan}/op.cpp, to reduce burden of implementing new HE backend schemes.
    • Optimize heaan multiply(cipher, plaintext) where plaintext == 0, 1, -1.
    • Optimize heaan/seal add(cipher, plaintext) where plaintext == 0.
    • Optimize heaan/seal subtract(cipher, plaintext) where plaintext == 0.
    • Optimize multiply(x,x) to use square(x)
  • SDL things
  • Generalize ops. Aside from multiply, add, negate, square, they should all come for free, i.e. not have any dependencies on SEAL or HEAAN.
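The security condition quoted in the TODO list, N ≥ ((λ+110) / 7.2) * log(P * q_L), can be turned into a quick parameter sanity check. A sketch, assuming the log is taken base 2 (min_poly_degree is a hypothetical helper, not part of the codebase):

```python
import math

def min_poly_degree(lam, log2_Pq):
    """Smallest ring dimension N satisfying the quoted HEAAN condition
    N >= ((lam + 110) / 7.2) * log2(P * q_L), for security level lam
    and log2_Pq = log2(P * q_L)."""
    return math.ceil((lam + 110) / 7.2 * log2_Pq)
```

For example, at λ = 128 with log2(P * q_L) = 300, the bound requires N of roughly 10,000, i.e. log2(poly mod) of at least 14, well above the log2(poly mod) values of 5-8 used in the HEAAN timing table above, which is why the current security is too small.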

GEMM optimizations

Verify addition and multiplication optimizations work on HEAAN and SEAL as expected

Python CryptoNets runs systematically faster than C++ CryptoNets

Python CryptoNets runs systematically faster than C++ CryptoNets when the C++ code is in a stand-alone program or in gtest. The trick is to embed the C++ CryptoNets benchmark code inside the ngraph-tf bridge. See: https://github.com/NervanaSystems/ngraph-tf-he/blob/he-benchmark/src/ngraph_encapsulate_op.cc#L457 . The run_he_benchmark() C++ function is then triggered by arbitrary Python computation through TF. This is potentially due to linking against suboptimal libraries; it needs to be investigated.

HEAAN Timing

CryptoNets

  • Timing through python batch 1
  • Timing through C++ batch 1
  • Enable batch through C++
  • Timing for different batch sizes

Mnist MLP

Others

  • Request code center scans
  • Compile with protection

Core dumped

When i run the command of MNIST-Cryptonets:
NGRAPH_ENCRYPT_DATA=1 NGRAPH_BATCH_DATA=1 NGRAPH_HE_SEAL_CONFIG=../../test/model/he_seal_ckks_config_N13_L7.json NGRAPH_TF_BACKEND=HE_SEAL_CKKS python test.py --batch_size=4096
I encounter this error (screenshot attached):
What's wrong?

Issue with Average Pooling?

Hi & Thanks for this great library!

I've been playing around with it for a couple of days, but I faced some issues when trying to integrate average pooling and having the encryption flags set.

In your MNIST-Cryptonets example the suggested test via

NGRAPH_ENCRYPT_DATA=1 \
NGRAPH_HE_SEAL_CONFIG=../../test/model/he_seal_ckks_config_N13_L7.json \
NGRAPH_TF_BACKEND=HE_SEAL \
python test.py --batch_size=4096

utilizes a subset of the network trained by calling train.py. This subset discards the average pooling layer. I updated the code in common.py and use it now for training as well as testing. It works when I don't include an average pooling layer, and it works when nothing is encrypted. As soon as I utilize the above encryption configuration, it crashes with the appended message.

I'm running it on a system with 4 CPU's and 26 GB of memory, which I hope is sufficient for the purpose, especially when testing on just 1 image.

How could this be tackled? Thanks in advance!


2019-07-05 10:25:47.168665: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300000000 Hz
2019-07-05 10:25:47.170316: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x4cf6860 executing computations on platform Host. Devices:
2019-07-05 10:25:47.170370: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
/
| Encryption parameters :
|   scheme: CKKS
|   poly_modulus_degree: 8192
|   coeff_modulus size: 180 (30 + 24 + 24 + 24 + 24 + 24 + 30) bits
\
[INFO] 2019-07-05T10:25:47z src/seal/he_seal_backend.cpp 84	Scale 1.67608e+07
[INFO] 2019-07-05T10:25:47z src/seal/he_seal_executable.cpp 142	Setting parameters and results
[INFO] 2019-07-05T10:25:47z src/seal/he_seal_executable.cpp 144	Parameters size 1
[INFO] 2019-07-05T10:25:47z src/seal/he_seal_executable.cpp 476	Encrypting data
[INFO] 2019-07-05T10:25:47z src/seal/he_seal_executable.cpp 479	Batching data with batch size 1
[INFO] 2019-07-05T10:25:47z src/seal/he_seal_executable.cpp 496	Processing server inputs
[INFO] 2019-07-05T10:25:47z src/seal/he_seal_executable.cpp 521	Encrypting parameter 0
[INFO] 2019-07-05T10:25:50z src/seal/he_seal_executable.cpp 538	Done encrypting parameter
2019-07-05 10:26:01.741378: I /home/ron/he-transformer_new/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/src/ngraph_utils.cc:252] Serializing graph to: tf_function_error_ngraph_cluster_0.json

Traceback (most recent call last):
  File "/home/ron/he-transformer_new/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/home/ron/he-transformer_new/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/ron/he-transformer_new/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InternalError: Caught exception while executing nGraph computation: scale mismatch

	 [[{{node ngraph_cluster_0}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test2.py", line 144, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/home/ron/he-transformer_new/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "test2.py", line 121, in main
    test_mnist_cnn(FLAGS)
  File "test2.py", line 80, in test_mnist_cnn
    y_conv_val = y_conv.eval(feed_dict={x: x_test, y_: y_test})
  File "/home/ron/he-transformer_new/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 695, in eval
    return _eval_using_default_session(self, feed_dict, self.graph, session)
  File "/home/ron/he-transformer_new/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 5181, in _eval_using_default_session
    return session.run(tensors, feed_dict)
  File "/home/ron/he-transformer_new/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/home/ron/he-transformer_new/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/ron/he-transformer_new/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/home/ron/he-transformer_new/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Caught exception while executing nGraph computation: scale mismatch

	 [[{{node ngraph_cluster_0}}]]
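The "scale mismatch" above is a CKKS-level invariant violation rather than a bug in the model itself: multiplying ciphertexts multiplies their scales, and addition requires both operands to sit at (approximately) the same scale, so a product must be rescaled before meeting a fresh operand. A toy model of that bookkeeping (assumed semantics only, not the SEAL API):

```python
# Toy model of CKKS scale tracking: multiply multiplies scales, rescale divides
# the scale by one coefficient-modulus prime, and add enforces equal scales --
# exactly the invariant the "scale mismatch" error reports broken.

class ToyCipher:
    def __init__(self, value, scale):
        self.value, self.scale = value, scale

    def mul(self, other):
        # Product of two ciphertexts lands at the product of their scales.
        return ToyCipher(self.value * other.value, self.scale * other.scale)

    def rescale(self, prime):
        # Dropping one modulus prime divides the scale back down.
        return ToyCipher(self.value, self.scale / prime)

    def add(self, other, tol=1e-9):
        # Addition is only defined between operands at (nearly) the same scale.
        if abs(self.scale - other.scale) > tol * self.scale:
            raise ValueError("scale mismatch")
        return ToyCipher(self.value + other.value, self.scale)
```

For instance, a product of two scale-2^30 ciphertexts sits at scale 2^60 and must be rescaled by a roughly 2^30 prime before it can be added to a fresh scale-2^30 value; an average-pooling constant like 1/k encoded at a different scale can break this alignment.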

Error running make -j4 install

Here is yet another problem, this time with MKL: the build automatically tries to fetch MKL from a repository but can't. I have installed MKL manually, but it still gives errors. Kindly help me here. Thank you.
Screenshot from 2019-03-15 06-12-13

Better HE abstraction.

Currently HEAAN and SEAL are mixed together in the code. We could consider moving them into top-level heaan and seal folders inside the src directory, leaving only the HE-independent code outside.

Error when test ax.py

Testing ax.py, it stops and does not wait for a client to connect.

(venv-tf-py3) allensll@allensll:~/nGraph-HE/he-transformer/examples$ NGRAPH_ENABLE_CLIENT=1 NGRAPH_ENCRYPT_DATA=1 NGRAPH_TF_BACKEND=HE_SEAL_CKKS python ax.py
2019-04-13 21:34:33.838920: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3192000000 Hz
2019-04-13 21:34:33.839354: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55564b6eaa80 executing computations on platform Host. Devices:
2019-04-13 21:34:33.839399: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
Result:  [[ 2.  6.]
 [12. 20.]]
(venv-tf-py3) allensll@allensll:~/nGraph-HE/he-transformer/examples$ 

Thanks.

he builds before gmp

The he target depends on ext_heaan, which depends on ext_ntl, which in turn depends on ext_gmp, so this shouldn't be the case.

[ 24%] Built target he
gmp-6.1.2.tar.xz                   100%[==============================================================>]   1.86M  1.87MB/s    in 1.0s

2018-04-13 15:48:01 (1.87 MB/s) - 'gmp-6.1.2.tar.xz' saved [1946336/1946336]

[ 27%] No update step for 'ext_gmp'
[ 31%] No patch step for 'ext_gmp'
[ 34%] Performing configure step for 'ext_gmp'
checking build system type... skylake-pc-linux-gnu
checking host system type... skylake-pc-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p

ngraph::CheckFailure error when loading keras model with tensorflow backend

I trained a keras model with tensorflow backend. Everything works well without importing ngraph_bridge package.

But when I imported ngraph-bridge package and load the keras model. I got the following errors. I built the he-transformer-0.5.0 on ubuntu 18.04 with python 3.6.

I just use the load_model function to load the trained model.

model_name = 'models/checkpoint_NoneActivation_1k.01-0.8113.h5'
autoencoder = load_model(model_name)

autoencoder.summary()

(venv-tf-py3) jchen67@ubuntu:~/Downloads/he-transformer-0.5.0/examples/MyProject$ NGRAPH_ENCRYPT_DATA=1 NGRAPH_HE_SEAL_CONFIG=../../test/model/he_seal_ckks_config_N13_L7.json NGRAPH_TF_BACKEND=HE_SEAL python load_keras_model.py
WARNING:tensorflow:From /home/jchen67/Downloads/he-transformer-0.5.0/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-07-17 13:38:06.146089: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3490765000 Hz
2019-07-17 13:38:06.146441: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x406edd0 executing computations on platform Host. Devices:
2019-07-17 13:38:06.146488: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
/
| Encryption parameters :
| scheme: CKKS
| poly_modulus_degree: 8192
| coeff_modulus size: 180 (30 + 24 + 24 + 24 + 24 + 24 + 30) bits
\

[INFO] 2019-07-17T17:38:06z src/seal/he_seal_backend.cpp 84 Scale 1.67608e+07
[INFO] 2019-07-17T17:38:06z src/seal/he_seal_executable.cpp 459 Encrypting data
[INFO] 2019-07-17T17:38:06z src/seal/he_seal_executable.cpp 462 Batching data with batch size 1
[INFO] 2019-07-17T17:38:06z src/seal/he_seal_executable.cpp 479 Processing server inputs
[INFO] 2019-07-17T17:38:06z src/seal/he_seal_executable.cpp 504 Encrypting parameter 0
[INFO] 2019-07-17T17:38:08z src/seal/he_seal_executable.cpp 519 Done encrypting parameter
[INFO] 2019-07-17T17:38:08z src/seal/he_seal_executable.cpp 504 Encrypting parameter 0
[INFO] 2019-07-17T17:38:10z src/seal/he_seal_executable.cpp 519 Done encrypting parameter
[INFO] 2019-07-17T17:38:14z src/seal/he_seal_executable.cpp 663 Total time 4120 (ms)
[INFO] 2019-07-17T17:38:14z src/seal/he_seal_executable.cpp 459 Encrypting data
[INFO] 2019-07-17T17:38:14z src/seal/he_seal_executable.cpp 462 Batching data with batch size 1
[INFO] 2019-07-17T17:38:14z src/seal/he_seal_executable.cpp 479 Processing server inputs
[INFO] 2019-07-17T17:38:14z src/seal/he_seal_executable.cpp 504 Encrypting parameter 0
terminate called after throwing an instance of 'ngraph::CheckFailure'
what(): Check '(values.size() >= m_batch_size)' failed at /home/jchen67/Downloads/he-transformer-0.5.0/src/he_plain_tensor.cpp:105:
values size 0 is smaller than batch size 1

terminate called recursively
terminate called recursively
Aborted (core dumped)

Do you guys have any ideas? I will appreciate if you give me any suggestions.

Make install fails

root@homom:~/he-transformer/build# make install
[ 8%] Built target ext_seal
[ 9%] Performing build step for 'ext_ngraph_tf'
ARTIFACTS location: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts
Running virtualenv with interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/bin/python3
Not overwriting existing python script /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/bin/python (you must use /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/bin/python3)
Installing setuptools, pip, wheel...
done.
Loading virtual environment from: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3
Loading virtual environment from: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3
PIP location
/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/bin/pip
Requirement already up-to-date: pip in ./venv-tf-py3/lib/python3.5/site-packages (19.1.1)
Requirement already up-to-date: setuptools in ./venv-tf-py3/lib/python3.5/site-packages (41.0.1)
Requirement already up-to-date: psutil in ./venv-tf-py3/lib/python3.5/site-packages (5.6.3)
Requirement already up-to-date: six>=1.10.0 in ./venv-tf-py3/lib/python3.5/site-packages (1.12.0)
Requirement already up-to-date: numpy>=1.13.3 in ./venv-tf-py3/lib/python3.5/site-packages (1.16.4)
Requirement already up-to-date: absl-py>=0.1.6 in ./venv-tf-py3/lib/python3.5/site-packages (0.7.1)
Requirement already up-to-date: astor>=0.6.0 in ./venv-tf-py3/lib/python3.5/site-packages (0.8.0)
Requirement already up-to-date: google_pasta>=0.1.1 in ./venv-tf-py3/lib/python3.5/site-packages (0.1.7)
Requirement already up-to-date: wheel>=0.26 in ./venv-tf-py3/lib/python3.5/site-packages (0.33.4)
Requirement already up-to-date: mock in ./venv-tf-py3/lib/python3.5/site-packages (3.0.5)
Requirement already up-to-date: termcolor>=1.1.0 in ./venv-tf-py3/lib/python3.5/site-packages (1.1.0)
Requirement already up-to-date: protobuf>=3.6.1 in ./venv-tf-py3/lib/python3.5/site-packages (3.8.0)
Requirement already up-to-date: keras_applications>=1.0.6 in ./venv-tf-py3/lib/python3.5/site-packages (1.0.8)
Requirement already up-to-date: keras_preprocessing==1.0.5 in ./venv-tf-py3/lib/python3.5/site-packages (1.0.5)
Requirement already up-to-date: yapf==0.26.0 in ./venv-tf-py3/lib/python3.5/site-packages (0.26.0)
Package Version


absl-py 0.7.1
astor 0.8.0
gast 0.2.2
google-pasta 0.1.7
grpcio 1.22.0
h5py 2.9.0
Keras-Applications 1.0.8
Keras-Preprocessing 1.0.5
Markdown 3.1.1
mock 3.0.5
numpy 1.16.4
pip 19.1.1
protobuf 3.8.0
psutil 5.6.3
setuptools 41.0.1
six 1.12.0
tensorboard 1.13.1
tensorflow 1.13.1
tensorflow-estimator 1.13.0
termcolor 1.1.0
Werkzeug 0.15.4
wheel 0.33.4
yapf 0.26.0
Target Arch: native
Building TensorFlow
fatal: destination path 'tensorflow' already exists and is not an empty directory.
remote: Enumerating objects: 69, done.
remote: Counting objects: 100% (58/58), done.
remote: Compressing objects: 100% (24/24), done.
remote: Total 31 (delta 26), reused 12 (delta 7), pack-reused 0
Unpacking objects: 100% (31/31), done.
From https://github.com/tensorflow/tensorflow
377a4df..e96bb86 master -> origin/master
HEAD is now at 6612da8... Merge pull request #26101 from gunan/r1.13
PYTHON_BIN_PATH: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/bin/python
SOURCE DIR: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow
ARTIFACTS DIR: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/tensorflow
WARNING: Running Bazel server needs to be killed, because the startup options are different.
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
INFO: Invocation ID: 75ecb08a-8fda-4dc9-ad4f-04c87b50510c
You have bazel 0.21.0 installed.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
--config=gdr # Build with GDR support.
--config=verbs # Build with libverbs support.
--config=ngraph # Build with Intel nGraph support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws # Disable AWS S3 filesystem support.
--config=nogcp # Disable GCP support.
--config=nohdfs # Disable HDFS support.
--config=noignite # Disable Apacha Ignite support.
--config=nokafka # Disable Apache Kafka support.
--config=nonccl # Disable NVIDIA NCCL support.
Configuration finished
Starting local Bazel server and connecting to it...
INFO: Invocation ID: a9c4bbe1-78c6-48ab-aca1-99f6a3d2596f
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/python/BUILD:2986:1: in py_library rule //tensorflow/python:standard_ops: target '//tensorflow/python:standard_ops' depends on deprecated target '//tensorflow/python/ops/distributions:distributions': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.distributions will not receive new features, and will be removed by early 2019. You should update all usage of tf.distributions to tfp.distributions.
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/python/BUILD:77:1: in py_library rule //tensorflow/python:no_contrib: target '//tensorflow/python:no_contrib' depends on deprecated target '//tensorflow/python/ops/distributions:distributions': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.distributions will not receive new features, and will be removed by early 2019. You should update all usage of tf.distributions to tfp.distributions.
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/contrib/gan/BUILD:136:1: in py_library rule //tensorflow/contrib/gan:losses_impl: target '//tensorflow/contrib/gan:losses_impl' depends on deprecated target '//tensorflow/python/ops/distributions:distributions': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.distributions will not receive new features, and will be removed by early 2019. You should update all usage of tf.distributions to tfp.distributions.
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/contrib/metrics/BUILD:16:1: in py_library rule //tensorflow/contrib/metrics:metrics_py: target '//tensorflow/contrib/metrics:metrics_py' depends on deprecated target '//tensorflow/python/ops/distributions:distributions': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.distributions will not receive new features, and will be removed by early 2019. You should update all usage of tf.distributions to tfp.distributions.
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/contrib/learn/BUILD:17:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': No longer supported. Switch to SavedModel immediately.
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/contrib/learn/BUILD:17:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': No longer supported. Switch to SavedModel immediately.
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/contrib/bayesflow/BUILD:17:1: in py_library rule //tensorflow/contrib/bayesflow:bayesflow_py: target '//tensorflow/contrib/bayesflow:bayesflow_py' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of tf.contrib.distributions to tfp.distributions.
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/contrib/seq2seq/BUILD:23:1: in py_library rule //tensorflow/contrib/seq2seq:seq2seq_py: target '//tensorflow/contrib/seq2seq:seq2seq_py' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of tf.contrib.distributions to tfp.distributions.
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/contrib/seq2seq/BUILD:23:1: in py_library rule //tensorflow/contrib/seq2seq:seq2seq_py: target '//tensorflow/contrib/seq2seq:seq2seq_py' depends on deprecated target '//tensorflow/python/ops/distributions:distributions': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.distributions will not receive new features, and will be removed by early 2019. You should update all usage of tf.distributions to tfp.distributions.
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/contrib/timeseries/python/timeseries/state_space_models/BUILD:76:1: in py_library rule //tensorflow/contrib/timeseries/python/timeseries/state_space_models:kalman_filter: target '//tensorflow/contrib/timeseries/python/timeseries/state_space_models:kalman_filter' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of tf.contrib.distributions to tfp.distributions.
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/contrib/timeseries/python/timeseries/BUILD:356:1: in py_library rule //tensorflow/contrib/timeseries/python/timeseries:ar_model: target '//tensorflow/contrib/timeseries/python/timeseries:ar_model' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of tf.contrib.distributions to tfp.distributions.
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/contrib/timeseries/python/timeseries/state_space_models/BUILD:233:1: in py_library rule //tensorflow/contrib/timeseries/python/timeseries/state_space_models:filtering_postprocessor: target '//tensorflow/contrib/timeseries/python/timeseries/state_space_models:filtering_postprocessor' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of tf.contrib.distributions to tfp.distributions.
WARNING: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/contrib/BUILD:13:1: in py_library rule //tensorflow/contrib:contrib_py: target '//tensorflow/contrib:contrib_py' depends on deprecated target '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018. You should update all usage of tf.contrib.distributions to tfp.distributions.
INFO: Analysed target //tensorflow/tools/pip_package:build_pip_package (356 packages loaded, 22397 targets configured).
INFO: Found 1 target...
Target //tensorflow/tools/pip_package:build_pip_package up-to-date:
bazel-bin/tensorflow/tools/pip_package/build_pip_package
INFO: Elapsed time: 48.855s, Critical Path: 2.05s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
Mon Jul 8 17:51:39 UTC 2019 : === Preparing sources in dir: /tmp/tmp.RhfNxZTw8A
~/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow ~/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow
~/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow
Mon Jul 8 17:51:54 UTC 2019 : === Building wheel
warning: no files found matching '*.pyd' under directory '*'
warning: no files found matching '*.pd' under directory '*'
warning: no files found matching '*.dll' under directory '*'
warning: no files found matching '*.lib' under directory '*'
warning: no files found matching '*.h' under directory 'tensorflow/include/tensorflow'
warning: no files found matching '*' under directory 'tensorflow/include/Eigen'
warning: no files found matching '*.h' under directory 'tensorflow/include/google'
warning: no files found matching '*' under directory 'tensorflow/include/third_party'
warning: no files found matching '*' under directory 'tensorflow/include/unsupported'
Mon Jul 8 17:52:16 UTC 2019 : === Output wheel file is in: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/tensorflow
TF Wheel: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/tensorflow/tensorflow-1.13.1-cp35-cp35m-linux_x86_64.whl
INFO: Invocation ID: e2d7c366-5e10-4d11-8896-2f7e976e6f2e
INFO: Analysed target //tensorflow:libtensorflow_cc.so (1 packages loaded, 8 targets configured).
INFO: Found 1 target...
Target //tensorflow:libtensorflow_cc.so up-to-date:
bazel-bin/tensorflow/libtensorflow_cc.so
INFO: Elapsed time: 3.347s, Critical Path: 2.15s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
Copying bazel-bin/tensorflow/libtensorflow_cc.so to /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/tensorflow
Copying bazel-bin/tensorflow/libtensorflow_framework.so to /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/tensorflow
Loading virtual environment from: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3
Processing ./tensorflow-1.13.1-cp35-cp35m-linux_x86_64.whl
Requirement already satisfied, skipping upgrade: astor>=0.6.0 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (0.8.0)
Requirement already satisfied, skipping upgrade: keras-preprocessing>=1.0.5 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (1.0.5)
Requirement already satisfied, skipping upgrade: numpy>=1.13.3 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (1.16.4)
Requirement already satisfied, skipping upgrade: termcolor>=1.1.0 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (1.1.0)
Requirement already satisfied, skipping upgrade: grpcio>=1.8.6 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (1.22.0)
Requirement already satisfied, skipping upgrade: absl-py>=0.1.6 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (0.7.1)
Requirement already satisfied, skipping upgrade: tensorflow-estimator<1.14.0rc0,>=1.13.0 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (1.13.0)
Requirement already satisfied, skipping upgrade: gast>=0.2.0 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (0.2.2)
Requirement already satisfied, skipping upgrade: protobuf>=3.6.1 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (3.8.0)
Requirement already satisfied, skipping upgrade: six>=1.10.0 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (1.12.0)
Requirement already satisfied, skipping upgrade: tensorboard<1.14.0,>=1.13.0 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (1.13.1)
Requirement already satisfied, skipping upgrade: wheel>=0.26 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (0.33.4)
Requirement already satisfied, skipping upgrade: keras-applications>=1.0.6 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow==1.13.1) (1.0.8)
Requirement already satisfied, skipping upgrade: mock>=2.0.0 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorflow-estimator<1.14.0rc0,>=1.13.0->tensorflow==1.13.1) (3.0.5)
Requirement already satisfied, skipping upgrade: setuptools in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from protobuf>=3.6.1->tensorflow==1.13.1) (41.0.1)
Requirement already satisfied, skipping upgrade: markdown>=2.6.8 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow==1.13.1) (3.1.1)
Requirement already satisfied, skipping upgrade: werkzeug>=0.11.15 in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from tensorboard<1.14.0,>=1.13.0->tensorflow==1.13.1) (0.15.4)
Requirement already satisfied, skipping upgrade: h5py in /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages (from keras-applications>=1.0.6->tensorflow==1.13.1) (2.9.0)
Installing collected packages: tensorflow
Found existing installation: tensorflow 1.13.1
Uninstalling tensorflow-1.13.1:
Successfully uninstalled tensorflow-1.13.1
Successfully installed tensorflow-1.13.1
LIB: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.5/site-packages/tensorflow
CXX_ABI: 1
nGraph Version: v0.19.1
fatal: destination path 'ngraph' already exists and is not an empty directory.
remote: Enumerating objects: 250, done.
remote: Counting objects: 100% (250/250), done.
remote: Compressing objects: 100% (43/43), done.
remote: Total 264 (delta 219), reused 221 (delta 207), pack-reused 14
Receiving objects: 100% (264/264), 125.33 KiB | 0 bytes/s, done.
Resolving deltas: 100% (220/220), completed with 94 local objects.
From https://github.com/NervanaSystems/ngraph
0ad2a3d..341205c ayzhuang/batch_norm_infer_relu_fusion -> origin/ayzhuang/batch_norm_infer_relu_fusion
4e3f03b..a601fac bob/static_backend_init -> origin/bob/static_backend_init
6a9c4bc..595a7ee leona/doc_v0.23-doc -> origin/leona/doc_v0.23-doc
f7b343d..127a4fd nishant_quantized_dot_core -> origin/nishant_quantized_dot_core
a18ad79..5cfe107 rearhart/plaidml -> origin/rearhart/plaidml
3791924..d625471 rearhart/plaidml-rc -> origin/rearhart/plaidml-rc

 + aa010ab...ec8c4b2 silee2/gelu -> origin/silee2/gelu (forced update)
HEAD is now at 1cbad22... Add environment variable to explicitly enable f->is_dynamic() (#2973)
Source location: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph
Running COMMAND: cmake -DNGRAPH_INSTALL_PREFIX=/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts -DNGRAPH_USE_CXX_ABI=1 -DNGRAPH_DEX_ONLY=TRUE -DNGRAPH_DEBUG_ENABLE=NO -DNGRAPH_TARGET_ARCH=native -DNGRAPH_TUNE_ARCH=native -DNGRAPH_DISTRIBUTED_ENABLE=OFF -DNGRAPH_TOOLS_ENABLE=YES -DNGRAPH_GPU_ENABLE=NO -DNGRAPH_PLAIDML_ENABLE=NO -DNGRAPH_INTELGPU_ENABLE=NO -DNGRAPH_UNIT_TEST_ENABLE=YES /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph
    -- NGRAPH_VERSION 0.19.1+1cbad22
    -- NGRAPH_VERSION_SHORT 0.19.1
    -- NGRAPH_WHEEL_VERSION 0.19.1
    -- NGRAPH_API_VERSION 0.19
    -- NGRAPH_FORWARD_CMAKE_ARGS -DCMAKE_C_COMPILER=/usr/bin/cc;-DCMAKE_CXX_COMPILER=/usr/bin/c++;-DCMAKE_BUILD_TYPE=Release
    -- NGRAPH_UNIT_TEST_ENABLE: ON
    -- NGRAPH_TOOLS_ENABLE: ON
    -- NGRAPH_CPU_ENABLE: ON
    -- NGRAPH_INTELGPU_ENABLE: OFF
    -- NGRAPH_GPU_ENABLE: OFF
    -- NGRAPH_INTERPRETER_ENABLE: ON
    -- NGRAPH_NOP_ENABLE: ON
    -- NGRAPH_GPUH_ENABLE: OFF
    -- NGRAPH_GENERIC_CPU_ENABLE: OFF
    -- NGRAPH_DEBUG_ENABLE: OFF
    -- NGRAPH_DEPRECATED_ENABLE: OFF
    -- NGRAPH_ONNX_IMPORT_ENABLE: OFF
    -- NGRAPH_DEX_ONLY: ON
    -- NGRAPH_ENABLE_CPU_CONV_AUTO: ON
    -- NGRAPH_CODE_COVERAGE_ENABLE: OFF
    -- NGRAPH_LIB_VERSIONING_ENABLE: OFF
    -- NGRAPH_PYTHON_BUILD_ENABLE: OFF
    -- NGRAPH_USE_PREBUILT_LLVM: OFF
    -- NGRAPH_PLAIDML_ENABLE: OFF
    -- NGRAPH_DISTRIBUTED_ENABLE: OFF
    -- NGRAPH_JSON_ENABLE: ON
    -- Installation directory: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts
    -- nGraph using CXX11 ABI: 1
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake/tbb
    make[3]: Entering directory '/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake/tbb'
    make[4]: Entering directory '/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake/tbb'
    make[5]: Entering directory '/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake/tbb'
    make[5]: Leaving directory '/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake/tbb'
    [100%] Built target ext_tbb
    make[4]: Leaving directory '/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake/tbb'
    make[3]: Leaving directory '/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake/tbb'
    -- Building Intel TBB: /usr/bin/make -j4 compiler=gcc tbb_build_dir=/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake/tbb_build tbb_build_prefix=tbb
CMake Warning (dev) at cmake/external_tbb.cmake:36 (find_package):
  Policy CMP0074 is not set: find_package uses <PackageName>_ROOT variables.
  Run "cmake --help-policy CMP0074" for policy details.  Use the cmake_policy
  command to set the policy and suppress this warning.

  CMake variable TBB_ROOT is set to:

    /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake/tbb/tbb-src

  For compatibility, CMake is ignoring the variable.
Call Stack (most recent call first):
  CMakeLists.txt:499 (include)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Found TBB and imported target TBB::tbb
-- TBB so version: 2
-- Compile Flags: -DEIGEN_MPL2_ONLY -DTBB_USE_THREADING_TOOLS -std=c++11 -O2 -fPIC -Wformat -Wformat-security -D_FORTIFY_SOURCE=2 -fstack-protector-strong -D_GLIBCXX_USE_CXX11_ABI=1 -march=native -mtune=native -DNGRAPH_CPU_ENABLE
-- Shared Link Flags: -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now
-- CMAKE_CXX_FLAGS_RELEASE -O3 -DNDEBUG
-- CMAKE_CXX_FLAGS_DEBUG -O0 -g
-- Found OpenMP_C: -fopenmp
-- Found OpenMP_CXX: -fopenmp
-- Found OpenMP: TRUE
-- PlaidML not enabled; not compiling ngraph-to-plaidml
-- tools enabled
-- Adding unit test for backend INTERPRETER
-- Adding unit test for backend CPU
-- unit tests enabled
-- Configuring done
-- Generating done
-- Build files have been written to: /root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake
Running COMMAND: make -j4
[ 2%] Built target ext_gtest
[ 4%] Built target ext_json
[ 6%] Built target ext_eigen
[ 6%] Performing download step (download, verify and extract) for 'ext_mkl'
-- verifying file...
file='/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake/mkl/src/mklml_lnx_2019.0.3.20190220.tgz'
-- SHA1 hash of
/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake/mkl/src/mklml_lnx_2019.0.3.20190220.tgz
does not match expected value
expected: 'b536cd3929ab9ff26a9adc903c92d006d142107b'
actual: 'da39a3ee5e6b4b0d3255bfef95601890afd80709'
-- File already exists but hash mismatch. Removing...
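(Aside, not part of the original log: the "actual" value reported above, da39a3ee..., is the SHA-1 digest of empty input, which means the previously cached tarball was a zero-byte file from a failed download. This is easy to confirm locally:)

```shell
# SHA-1 of empty input; matches the "actual" hash CMake reported,
# so the cached mklml tarball was zero bytes when it was verified.
printf '' | sha1sum | cut -d' ' -f1
# → da39a3ee5e6b4b0d3255bfef95601890afd80709
```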
-- Downloading...
dst='/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/ngraph/build_cmake/mkl/src/mklml_lnx_2019.0.3.20190220.tgz'
timeout='none'
-- Using src='https://github.com/intel/mkl-dnn/releases/download/v0.18/mklml_lnx_2019.0.3.20190220.tgz'
-- Retrying...
-- Using src='https://github.com/intel/mkl-dnn/releases/download/v0.18/mklml_lnx_2019.0.3.20190220.tgz'
-- Retry after 5 seconds (attempt #2) ...
[ 50%] Built target ngraph
Scanning dependencies of target interpreter_backend
Scanning dependencies of target reserialize
Scanning dependencies of target nop_backend
[ 50%] Building CXX object src/tools/reserialize/CMakeFiles/reserialize.dir/reserialize.cpp.o
[ 50%] Building CXX object src/ngraph/runtime/nop/CMakeFiles/nop_backend.dir/nop_backend.cpp.o
[ 50%] Building CXX object src/ngraph/runtime/interpreter/CMakeFiles/interpreter_backend.dir/int_backend.cpp.o
[ 50%] Linking CXX executable reserialize
[ 50%] Built target reserialize
Scanning dependencies of target ngraph_test_util
[ 50%] Building CXX object test/util/CMakeFiles/ngraph_test_util.dir/autodiff/backprop_function.cpp.o
[ 50%] Linking CXX shared library ../../libnop_backend.so
[ 50%] Building CXX object src/ngraph/runtime/interpreter/CMakeFiles/interpreter_backend.dir/node_wrapper.cpp.o
[ 50%] Built target nop_backend
[ 50%] Building CXX object src/ngraph/runtime/interpreter/CMakeFiles/interpreter_backend.dir/int_executable.cpp.o
[ 50%] Building CXX object test/util/CMakeFiles/ngraph_test_util.dir/all_close_f.cpp.o
[ 51%] Building CXX object test/util/CMakeFiles/ngraph_test_util.dir/float_util.cpp.o
-- Using src='https://github.com/intel/mkl-dnn/releases/download/v0.18/mklml_lnx_2019.0.3.20190220.tgz'
-- Retry after 5 seconds (attempt #3) ...
[ 51%] Building CXX object test/util/CMakeFiles/ngraph_test_util.dir/test_tools.cpp.o
[ 51%] Building CXX object test/util/CMakeFiles/ngraph_test_util.dir/test_control.cpp.o
[ 51%] Building CXX object test/util/CMakeFiles/ngraph_test_util.dir/test_case.cpp.o
-- Using src='https://github.com/intel/mkl-dnn/releases/download/v0.18/mklml_lnx_2019.0.3.20190220.tgz'
-- Retry after 15 seconds (attempt #4) ...
[ 52%] Linking CXX static library libngraph_test_util.a
[ 52%] Built target ngraph_test_util
-- Using src='https://github.com/intel/mkl-dnn/releases/download/v0.18/mklml_lnx_2019.0.3.20190220.tgz'
-- Retry after 60 seconds (attempt #5) ...
[ 53%] Linking CXX shared library ../../libinterpreter_backend.so
[ 53%] Built target interpreter_backend
-- Using src='https://github.com/intel/mkl-dnn/releases/download/v0.18/mklml_lnx_2019.0.3.20190220.tgz'
CMake Error at ext_mkl-stamp/download-ext_mkl.cmake:159 (message):
Each download failed!

error: downloading 'https://github.com/intel/mkl-dnn/releases/download/v0.18/mklml_lnx_2019.0.3.20190220.tgz' failed
     status_code: 1
     status_string: "Unsupported protocol"
     log:
     --- LOG BEGIN ---
     Protocol "https" not supported or disabled in libcurl

Closing connection -1

     --- LOG END ---
     error: downloading 'https://github.com/intel/mkl-dnn/releases/download/v0.18/mklml_lnx_2019.0.3.20190220.tgz' failed
     status_code: 1
     status_string: "Unsupported protocol"
     log:
     --- LOG BEGIN ---
     Protocol "https" not supported or disabled in libcurl

Closing connection -1

     --- LOG END ---
     error: downloading 'https://github.com/intel/mkl-dnn/releases/download/v0.18/mklml_lnx_2019.0.3.20190220.tgz' failed
     status_code: 1
     status_string: "Unsupported protocol"
     log:
     --- LOG BEGIN ---
     Protocol "https" not supported or disabled in libcurl

Closing connection -1

     --- LOG END ---
     error: downloading 'https://github.com/intel/mkl-dnn/releases/download/v0.18/mklml_lnx_2019.0.3.20190220.tgz' failed
     status_code: 1
     status_string: "Unsupported protocol"
     log:
     --- LOG BEGIN ---
     Protocol "https" not supported or disabled in libcurl

Closing connection -1

     --- LOG END ---
     error: downloading 'https://github.com/intel/mkl-dnn/releases/download/v0.18/mklml_lnx_2019.0.3.20190220.tgz' failed
     status_code: 1
     status_string: "Unsupported protocol"
     log:
     --- LOG BEGIN ---
     Protocol "https" not supported or disabled in libcurl

Closing connection -1

     --- LOG END ---
     error: downloading 'https://github.com/intel/mkl-dnn/releases/download/v0.18/mklml_lnx_2019.0.3.20190220.tgz' failed
     status_code: 1
     status_string: "Unsupported protocol"
     log:
     --- LOG BEGIN ---
     Protocol "https" not supported or disabled in libcurl

Closing connection -1

     --- LOG END ---

CMakeFiles/ext_mkl.dir/build.make:92: recipe for target 'mkl/src/ext_mkl-stamp/ext_mkl-download' failed
make[5]: *** [mkl/src/ext_mkl-stamp/ext_mkl-download] Error 1
CMakeFiles/Makefile2:264: recipe for target 'CMakeFiles/ext_mkl.dir/all' failed
make[4]: *** [CMakeFiles/ext_mkl.dir/all] Error 2
Makefile:151: recipe for target 'all' failed
make[3]: *** [all] Error 2
Traceback (most recent call last):
  File "/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_ngtf.py", line 345, in <module>
    main()
  File "/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_ngtf.py", line 268, in main
    build_ngraph(build_dir, ngraph_src_dir, ngraph_cmake_flags, verbosity)
  File "/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/tools/build_utils.py", line 78, in build_ngraph
    command_executor(cmd, verbose=True)
  File "/root/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/tools/build_utils.py", line 45, in command_executor
    raise Exception("Error running command: " + cmd)
Exception: Error running command: make -j4
CMakeFiles/ext_ngraph_tf.dir/build.make:112: recipe for target 'ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-build' failed
make[2]: *** [ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-build] Error 1
CMakeFiles/Makefile2:109: recipe for target 'CMakeFiles/ext_ngraph_tf.dir/all' failed
make[1]: *** [CMakeFiles/ext_ngraph_tf.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
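(Editorial note, not from the original reporter: the repeated `Protocol "https" not supported or disabled in libcurl` messages indicate the curl/libcurl that this CMake build invokes was compiled without TLS support, so every HTTPS download fails. A possible workaround, offered as an untested sketch, is to fetch the mklml tarball out-of-band with an HTTPS-capable tool, verify it against the SHA-1 that the ext_mkl step expects, and place it in the `.../build_cmake/ngraph/build_cmake/mkl/src/` directory before re-running `make`.)

```shell
# Sketch: verify an out-of-band copy of the mklml tarball against the SHA-1
# that the ext_mkl download step expects. "hash mismatch" is also printed for
# a missing or truncated (e.g. zero-byte) file. Filename and hash are taken
# from the log above; the destination path is an assumption based on the log.
EXPECTED=b536cd3929ab9ff26a9adc903c92d006d142107b
TGZ=mklml_lnx_2019.0.3.20190220.tgz
ACTUAL=$(sha1sum "$TGZ" 2>/dev/null | cut -d' ' -f1)
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "hash ok: copy $TGZ into build_cmake/ngraph/build_cmake/mkl/src/"
else
  echo "hash mismatch: re-download $TGZ with an HTTPS-capable curl or wget"
fi
```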