mofanv / ppfl
Privacy-preserving Federated Learning with Trusted Execution Environments
License: MIT License
Hi @mofanv, this is really great work, and I have a question about the use of AES.
void aes_cbc_TA(char* xcrypt, float* gradient, int org_len)
{
    IMSG("aes_cbc_TA %s ing\n", xcrypt);
    //convert the float array to uint8_t one by one
    uint8_t *byte;
    uint8_t array[org_len*4];
    for(int z = 0; z < org_len; z++){
        byte = (uint8_t*)(&gradient[z]);
        for(int y = 0; y < 4; y++){
            array[z*4 + y] = byte[y];
        }
    }
    //set ctx, iv, and key for AES
    int enc_len = (int)(org_len/4); // org_len*4 bytes split into 16-byte blocks
    struct AES_ctx ctx;
    uint8_t iv[] = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
                     0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
    uint8_t key[16] = { 0x2b, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6,
                        0xab, 0xf7, 0x15, 0x88, 0x09, 0xcf, 0x4f, 0x3c };
    //encrypt or decrypt in place, one 16-byte block at a time
    AES_init_ctx_iv(&ctx, key, iv);
    for (int i = 0; i < enc_len; ++i)
    {
        if(strncmp(xcrypt, "encrypt", 2) == 0){
            AES_CBC_encrypt_buffer(&ctx, array + (i * 16), 16);
        }else if(strncmp(xcrypt, "decrypt", 2) == 0){
            AES_CBC_decrypt_buffer(&ctx, array + (i * 16), 16);
        }
    }
    //convert uint8_t back to float one by one
    for(int z = 0; z < org_len; z++){
        gradient[z] = *(float*)(&array[z*4]);
    }
}
For the encryption, it seems that we do not output the ciphertext? I want to use a client-server AES workflow: the client encrypts the message and sends the ciphertext to the server, where it is decrypted inside the enclave.
Hello @mofanv,
I attempted to run fl_tee_layerwise.sh on a HiKey 960, the same board used in the original paper (PPFL: Privacy-preserving Federated Learning with Trusted Execution Environments). However, running fl_tee_layerwise.sh gives TEEC_InvokeCommand(forward) failed 0xffff3024 origin 0x3, the same error as in mofanv/darknetz#14 and mofanv/darknetz#29. Other scripts such as fl_tee_standard_noss.sh and fl_tee_standard_ss.sh run correctly.
Since the tz_datasets/cfg folder does not contain the greedy-cnn-aux.cfg, greedy-cnn-layer1.cfg, greedy-cnn-layer2.cfg, greedy-cnn-layer3.cfg, and mnist_greedy-cnn.cfg files that fl_tee_layerwise.sh requires, I manually copied them from PPFL/server_side_sgx/cfg.
Error log:
============= initialization =============
============= layer 1 =============
============= round 1 =============
============= copy weights server -> client 1 =============
Warning: Permanently added '[127.0.0.1]:8888' (ECDSA) to the list of known hosts.
real 0m1.711s
user 0m0.008s
sys 0m0.000s
tee weights: 82356 Bytes
============= ssh to the client and local training =============
layer filters size input output
0 conv_TA 2 3 x 3 / 1 32 x 32 x 3 -> 32 x 32 x 2 0.000 BFLOPs
1 conv_TA 2 3 x 3 / 1 32 x 32 x 2 -> 32 x 32 x 2 0.000 BFLOPs
2 connected_TA 2048 -> 10
Prepare session with the TA
Begin darknet
mnist_greedy-cnn
1
workspace_size=110592
3 softmax_TA 10
4 cost_TA 10
Loading weights from /root/models/mnist/mnist_greedy-cnn_global.weights...Done!
Learning Rate: 0.01, Momentum: 0.9, Decay: 5e-05
3000
32 28
output file: /media/results/train_mnist_greedy-cnn_pps0_ppe4.txt
current_batch=10
Loaded: 0.003913 seconds
darknetp: TEEC_InvokeCommand(forward) failed 0xffff3024 origin 0x3
real 0m1.594s
user 0m0.003s
sys 0m0.005s
I checked mofanv/darknetz#14 and mofanv/darknetz#29 and attempted to increase TA_STACK_SIZE and TA_DATA_SIZE in ta/include/user_ta_header_defines.h. With the values below I am still getting the error, and I cannot increase them further because that causes a TEEC_OpenSession failed with code 0xffff000c origin 0x3 error, as in mofanv/darknetz#32.
/* Provisioned stack size */
#define TA_STACK_SIZE (1 * 1024 * 1024)
/* Provisioned heap size for TEE_Malloc() and friends */
#define TA_DATA_SIZE (12 * 1024 * 1024)
I isolated the failing command darknetp classifier train -pp_start_f 0 -pp_end 4 -ss 2 "cfg/mnist.dataset" "cfg/mnist_greedy-cnn.cfg" "/root/models/mnist/mnist_greedy-cnn_global.weights" and ran it manually on the client. With -pp_start_f 0 -pp_end 4 it fails, but with -pp_start_f 0 -pp_end 3 it runs, so it seems that layer 4 is the one that cannot fit into TEE memory.
Do you know what configuration was used in the original paper (PPFL: Privacy-preserving Federated Learning with Trusted Execution Environments)? Thank you!