
hadoop-crypto's Introduction

Autorelease

CircleCI Build Status Maven Central

Seekable Crypto

Seekable Crypto is a Java library that provides the ability to seek within SeekableInputs while decrypting the underlying contents, along with utilities for generating and storing the keys used to encrypt/decrypt the data streams. An implementation of the Hadoop FileSystem is also included that uses the Seekable Crypto library to provide efficient, transparent client-side encryption for Hadoop filesystems.

Supported Ciphers

Currently AES/CTR/NoPadding and AES/CBC/PKCS5Padding are supported.

Disclaimer: Neither supported AES mode is authenticated. Authentication should be performed by consumers of this library via an external cryptographic mechanism such as Encrypt-then-MAC. Failure to properly authenticate ciphertext breaks security in some scenarios where an attacker can manipulate ciphertext inputs.
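
For illustration, Encrypt-then-MAC means computing a MAC over the ciphertext with a key separate from the encryption key and verifying that MAC before decrypting. The sketch below uses standard JCE classes (javax.crypto.Mac, javax.crypto.spec.SecretKeySpec, java.security.MessageDigest); macKey and encryptedBytes are placeholders, and none of this is part of this library's API:

// Encrypt-then-MAC sketch: tag the ciphertext after encryption
Mac mac = Mac.getInstance("HmacSHA256");
mac.init(new SecretKeySpec(macKey, "HmacSHA256")); // macKey: a key independent of the encryption key
byte[] tag = mac.doFinal(encryptedBytes);          // store/transmit the tag alongside the ciphertext

// ... and verify the tag (in constant time) before handing bytes to the decrypting stream
Mac check = Mac.getInstance("HmacSHA256");
check.init(new SecretKeySpec(macKey, "HmacSHA256"));
boolean authentic = MessageDigest.isEqual(check.doFinal(encryptedBytes), tag);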

Programmatic Example

Source for examples can be found here

byte[] bytes = "0123456789".getBytes(StandardCharsets.UTF_8);

// Store this key material for future decryption
KeyMaterial keyMaterial = SeekableCipherFactory.generateKeyMaterial(AesCtrCipher.ALGORITHM);
ByteArrayOutputStream os = new ByteArrayOutputStream(bytes.length);

// Encrypt some bytes
OutputStream encryptedStream = CryptoStreamFactory.encrypt(os, keyMaterial, AesCtrCipher.ALGORITHM);
encryptedStream.write(bytes);
encryptedStream.close();
byte[] encryptedBytes = os.toByteArray();

// Bytes written to stream are encrypted
assertThat(encryptedBytes).isNotEqualTo(bytes);

SeekableInput is = new InMemorySeekableDataInput(encryptedBytes);
SeekableInput decryptedStream = CryptoStreamFactory.decrypt(is, keyMaterial, AesCtrCipher.ALGORITHM);

// Seek to the last byte in the decrypted stream and verify its decrypted value
byte[] readBytes = new byte[bytes.length];
decryptedStream.seek(bytes.length - 1);
decryptedStream.read(readBytes, 0, 1);
assertThat(readBytes[0]).isEqualTo(bytes[bytes.length - 1]);

// Seek to the beginning of the decrypted stream and verify it's equal to the raw bytes
decryptedStream.seek(0);
decryptedStream.read(readBytes, 0, bytes.length);
assertThat(readBytes).isEqualTo(bytes);

Hadoop Crypto

Hadoop Crypto is a library for per-file client-side encryption in Hadoop FileSystems such as HDFS or S3. It provides wrappers for the Hadoop FileSystem API that transparently encrypt and decrypt the underlying streams. The encryption scheme uses key encapsulation: each file is encrypted with a unique symmetric key, which is itself encrypted with a public/private key pair and stored alongside the file.
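
Conceptually, key encapsulation looks like the following sketch, which wraps a per-file AES key with an RSA public key using standard JCE primitives (javax.crypto and java.security). This is purely illustrative; the library's own KeyMaterial and key storage classes handle this for you:

// Per-file symmetric key
KeyGenerator keyGen = KeyGenerator.getInstance("AES");
keyGen.init(256);
SecretKey fileKey = keyGen.generateKey();

// Long-lived public/private key pair that protects every file key
KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
rsaGen.initialize(2048);
KeyPair pair = rsaGen.generateKeyPair();

// Wrap the file key with the public key; the wrapped bytes are what gets stored alongside the file
Cipher wrapCipher = Cipher.getInstance("RSA");
wrapCipher.init(Cipher.WRAP_MODE, pair.getPublic());
byte[] wrappedKey = wrapCipher.wrap(fileKey);

// Later, unwrap with the private key in order to decrypt the file
Cipher unwrapCipher = Cipher.getInstance("RSA");
unwrapCipher.init(Cipher.UNWRAP_MODE, pair.getPrivate());
SecretKey recoveredKey = (SecretKey) unwrapCipher.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);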

Architecture

The EncryptedFileSystem wraps any FileSystem implementation and encrypts the streams returned by open and create. These streams are encrypted/decrypted with a unique per-file symmetric key, which is passed to the KeyStorageStrategy that stores the key for future access. The provided storage strategy implementation encrypts the symmetric key using a public/private key pair and then stores the encrypted key on the FileSystem alongside the encrypted file.
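
A custom KeyStorageStrategy can be supplied to store file keys elsewhere (for example in a key management service). The sketch below is a hypothetical in-memory strategy suitable only for tests; the method names follow the put/get/remove pattern implied by the description above, and the exact interface signatures and package names should be checked against the library source:

// Hypothetical in-memory strategy: keys are lost when the process exits.
public final class InMemoryKeyStorageStrategy implements KeyStorageStrategy {
    private final Map<String, KeyMaterial> keys = new ConcurrentHashMap<>();

    @Override
    public void put(String fileKey, KeyMaterial keyMaterial) {
        keys.put(fileKey, keyMaterial);
    }

    @Override
    public KeyMaterial get(String fileKey) {
        return keys.get(fileKey);
    }

    @Override
    public void remove(String fileKey) {
        keys.remove(fileKey);
    }
}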

Standalone Example

The hadoop-crypto-all.jar can be added to the classpath of any client and used to wrap any concrete backing FileSystem. The scheme of the EncryptedFileSystem is e[FS-scheme], where [FS-scheme] is any FileSystem that can be instantiated statically using FileSystem#get (e.g. efile). The FileSystem implementation, public key, and private key must also be configured in core-site.xml.
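
For example, once core-site.xml is configured as in the Hadoop CLI section below, client code can obtain the encrypting FileSystem through the normal Hadoop API. A minimal sketch (the path and file contents are arbitrary):

// The efile scheme is mapped to StandaloneEncryptedFileSystem via core-site.xml
Configuration conf = new Configuration();
FileSystem efs = FileSystem.get(URI.create("efile:///"), conf);

// Streams returned by this FileSystem are transparently encrypted/decrypted
try (OutputStream out = efs.create(new Path("efile:/tmp/file.txt"))) {
    out.write("hello".getBytes(StandardCharsets.UTF_8));
}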

Hadoop CLI

Add hadoop-crypto-all.jar to the classpath of the CLI (e.g. share/hadoop/common).

Generate public/private keys
openssl genrsa -out rsa.key 2048
# Public Key
openssl rsa -in rsa.key -outform PEM -pubout 2>/dev/null | grep -v PUBLIC | tr -d '\r\n'
# Private Key
openssl pkcs8 -topk8 -inform pem -in rsa.key -outform pem -nocrypt | grep -v PRIVATE | tr -d '\r\n'
core-site.xml
<configuration>
    <property>
        <name>fs.efile.impl</name> <!-- others: fs.es3a.impl or fs.ehdfs.impl -->
        <value>com.palantir.crypto2.hadoop.StandaloneEncryptedFileSystem</value>
    </property>

    <property>
        <name>fs.efs.key.public</name>
        <value>MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqXkSOcB2UpLrlG3scAHDavPnSucxOwRWG12woY5JerYlqyIm7xcNuyLQ/rLPxdlCGgOZOoPzKVXc/3pAeOdPM1LcXLNW8d7Uht3vo7a6SR/mXMiCTMn+9wOx40Bq0ofvx9K4RSpW2lKrlJNUJG+RP5lO7OhB5pveEBMn/8OR2yMLgS58rHQ0nrXXUHqbWiMI8k+eYK7aimexkQDhIXtbqmQ5tAXKyoSMDAyeuDNY8WsYaW15OCwGSIRClNAiwPEGLQCYJQi41IxwQxwN42jQm7fwoVSrN4lAfi5B8EHxFglAZcE8nUTdTnXCbUk9SPz8XXmK4hmK9X4L+2Av4ucNLwIDAQAB</value>
    </property>

    <property>
        <name>fs.efs.key.private</name>
        <value>MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCpeRI5wHZSkuuUbexwAcNq8+dK5zE7BFYbXbChjkl6tiWrIibvFw27ItD+ss/F2UIaA5k6g/MpVdz/ekB4508zUtxcs1bx3tSG3e+jtrpJH+ZcyIJMyf73A7HjQGrSh+/H0rhFKlbaUquUk1Qkb5E/mU7s6EHmm94QEyf/w5HbIwuBLnysdDSetddQeptaIwjyT55grtqKZ7GRAOEhe1uqZDm0BcrKhIwMDJ64M1jxaxhpbXk4LAZIhEKU0CLA8QYtAJglCLjUjHBDHA3jaNCbt/ChVKs3iUB+LkHwQfEWCUBlwTydRN1OdcJtST1I/PxdeYriGYr1fgv7YC/i5w0vAgMBAAECggEASvSLhROEwbzNiRadLmT5Q4Kg19YtRgcC9pOXnbzK7wVE3835HmI55nzdpuj7UGxo+gyBZwoZMD0Tw8MUZOUZeH+7ixye5ddCdGwQo34cIl+DiaH9T20/4Yy2zuYc2QTanqyqZ5z0URejX9FRs9PMkC6EY+/NxetGaiGu3UZoalz7F/5wS8bCaKPkm3AjLvqXHL5KiSbPDPBQj4m+iFWLoWZL9FB1zyif+yBatU4cBCLHaTTgXroItEKcxTwFfyi2l059ItoP5E10djKHpMuPiPrTMS0FHAom3GZAYEFnjRgInR0sIotEwuSDObqcio1PdXRsi5Ul8MxfpXxLSuL+UQKBgQDcvmehBARNDksQJGzIyegKg10eLYdfXFCR+QDZeqJod/pCQ6gtW0aFYAoL0uXiMwQzSb6m7offmXH0JLLqOnjgcZlejHUDSTTWtNOYlGaO7OVgFcnG6/UnCE54eJcaw68auvPB9XW3gm5cfWSNpUI+6aJDBb6BKx8uNMoRreq9wwKBgQDEilhsCgUOIRkJfM5MYUzMT0gR8qt671q+lgTjBDwYvdoQ7BijG6Lbqbp9Xd4nODiw1t7e1Rexw+cuIeRs8NITU4f4Nfe25rRhZ+0n7g9OoCiRUoEsmd7cqDk6pubpw9hW1TKKLzTqExisGFy+bnUA8FFs2TbU9Xeb9kdm1GXgJQKBgAsN9f6YRubc+mFakaAUjGxKW9VxDkB2TQqiX6qEe7GjoILFBJ0Q3x06zAX/j8eeKm2vGb8eXuuRsaU6WUNlnjwPNFEJ06pQdjbyY05W0DQEJRCExtARbPuBbPyXfWm3twMtrZtfAYApJgG3vdtiFUk1Rgz5MqshT7RurFfqT8ElAoGAE2BEOVp/hxYSPtI0EGmjRZ0nUMWozDTesF1f2/Wl6xaEchikkSf/VUKVZRik9x7ez+hPDo7ZiCf1GaIzv926CDe69uhzJG/4JoY1ZjNdBPZbKYCFxZzh0MUw5yxfJXquUFkyY1cmE1GQpB6+vfNry4zlqiJ7+mC8yv5rqaKU7JUCgYBXPYpuQppR1EFj66LSrZ8ebXmt5TtwR839UkgEhLOBkO0cFP2BXVAMx9p0/MYLNIPk7vVpVtRCKYr6tBVdUWCin0obC5O+JzuhilQ0aH3xl5mbiasOvCNPjniaTViRt6zNlaq6RMS4x1LqYUyqc4LUrBbGMWJsdjYqVAi1Rq1FTw==</value>
    </property>
</configuration>
Commands
./bin/hadoop dfs -put file.txt efile:/tmp/file.txt
./bin/hadoop dfs -ls efile:/tmp
./bin/hadoop dfs -cat efile:/tmp/file.txt
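
For reference, the two Base64 key values configured in core-site.xml above are standard encodings (X.509/SubjectPublicKeyInfo for the public key, PKCS#8 for the private key) and can be parsed back into Java key objects as in the sketch below, where base64PublicKey and base64PrivateKey are placeholders for the configured strings. This is illustrative only and not part of the library's API:

String base64PublicKey = "<value of fs.efs.key.public>";
String base64PrivateKey = "<value of fs.efs.key.private>";

// Parse the keys using the standard java.security APIs
KeyFactory rsa = KeyFactory.getInstance("RSA");
PublicKey publicKey = rsa.generatePublic(
        new X509EncodedKeySpec(Base64.getDecoder().decode(base64PublicKey)));
PrivateKey privateKey = rsa.generatePrivate(
        new PKCS8EncodedKeySpec(Base64.getDecoder().decode(base64PrivateKey)));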

Programmatic Example

Source for examples can be found here

Initialization

// Get a local FileSystem
FileSystem fs = FileSystem.get(new URI("file:///"), new Configuration());

// Initialize EFS with random public/private key pair
KeyPair pair = TestKeyPairs.generateKeyPair();
KeyStorageStrategy keyStore = new FileKeyStorageStrategy(fs, pair);
EncryptedFileSystem efs = new EncryptedFileSystem(fs, keyStore);

Writing data using EFS

// Init data and local path to write to
byte[] data = "test".getBytes(StandardCharsets.UTF_8);
byte[] readData = new byte[data.length];
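// 'folder' below is assumed to be a JUnit TemporaryFolder rule from the original test source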
Path path = new Path(folder.newFile().getAbsolutePath());

// Write data out to the encrypted stream
OutputStream eos = efs.create(path);
eos.write(data);
eos.close();

// Reading through the decrypted stream produces the original bytes
InputStream ein = efs.open(path);
IOUtils.readFully(ein, readData);
assertThat(data, is(readData));

// Reading through the raw stream produces the encrypted bytes
InputStream in = fs.open(path);
IOUtils.readFully(in, readData);
assertThat(data, is(not(readData)));

// Wrapped symmetric key is stored next to the encrypted file
assertTrue(fs.exists(new Path(path + FileKeyStorageStrategy.EXTENSION)));

Hadoop Configuration Properties

Key | Description | Default
fs.efs.cipher | The cipher used to wrap the underlying streams | AES/CTR/NoPadding
fs.e[FS-scheme].impl | Must be set to com.palantir.crypto2.hadoop.StandaloneEncryptedFileSystem | (no default)
fs.efs.key.public | Base64-encoded X.509 public key | (no default)
fs.efs.key.private | Base64-encoded PKCS#8 private key | (no default)
fs.efs.key.algorithm | Public/private key pair algorithm | RSA
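
For example, to use the CBC cipher instead of the default, the fs.efs.cipher property can be set in core-site.xml or programmatically. A minimal sketch:

// Overrides the default AES/CTR/NoPadding stream cipher
Configuration conf = new Configuration();
conf.set("fs.efs.cipher", "AES/CBC/PKCS5Padding");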

License

This repository is made available under the Apache 2.0 License.

FAQ

log.warn lines from CryptoStreamFactory

WARN: Unable to initialize cipher with OpenSSL, falling back to JCE implementation

Falling back to the JCE implementation results in slower cipher performance than native OpenSSL. Resolve this by installing a compatible version of OpenSSL and symlinking it to the expected location, /usr/lib/libcrypto.so (OpenSSL 1.0 and 1.1 are currently supported).

Note: to support OpenSSL 1.1 we use releases from the Palantir fork of commons-crypto, since OpenSSL 1.1 support has been added to the mainline Apache repository but no release containing it has been published since 2016.

Exception in thread "main" java.io.IOException: java.security.GeneralSecurityException: CryptoCipher {org.apache.commons.crypto.cipher.OpenSslCipher} is not available or transformation AES/CTR/NoPadding is not supported.
	at org.apache.commons.crypto.utils.Utils.getCipherInstance(Utils.java:130)
	at ApacheCommonsCryptoLoad.main(ApacheCommonsCryptoLoad.java:10)
Caused by: java.security.GeneralSecurityException: CryptoCipher {org.apache.commons.crypto.cipher.OpenSslCipher} is not available or transformation AES/CTR/NoPadding is not supported.
	at org.apache.commons.crypto.cipher.CryptoCipherFactory.getCryptoCipher(CryptoCipherFactory.java:176)
	at org.apache.commons.crypto.utils.Utils.getCipherInstance(Utils.java:128)
	... 1 more
Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at org.apache.commons.crypto.utils.ReflectionUtils.newInstance(ReflectionUtils.java:90)
	at org.apache.commons.crypto.cipher.CryptoCipherFactory.getCryptoCipher(CryptoCipherFactory.java:160)
	... 2 more
Caused by: java.lang.reflect.InvocationTargetException
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
	at java.base/java.lang.reflect.Constructor.newInstance(Unknown Source)
	at org.apache.commons.crypto.utils.ReflectionUtils.newInstance(ReflectionUtils.java:88)
	... 3 more
Caused by: java.lang.RuntimeException: java.lang.UnsatisfiedLinkError: EVP_CIPHER_CTX_cleanup
	at org.apache.commons.crypto.cipher.OpenSslCipher.<init>(OpenSslCipher.java:59)
	... 8 more
Caused by: java.lang.UnsatisfiedLinkError: EVP_CIPHER_CTX_cleanup
	at org.apache.commons.crypto.cipher.OpenSslNative.initIDs(Native Method)
	at org.apache.commons.crypto.cipher.OpenSsl.<clinit>(OpenSsl.java:95)
	at org.apache.commons.crypto.cipher.OpenSslCipher.<init>(OpenSslCipher.java:57)
	... 8 more

hadoop-crypto's People

Contributors

asvoboda, carterkozak, ellisjoe, iamdanfox, j-baker, jaceklach, leonz, lorenzomartini, markelliot, mswintermeyer, nmiyake, pwoody, robert3005, rshkv, sandorw, schlosna, sjrand, splittingfield, svc-autorelease, svc-excavator-bot, tpetracca


hadoop-crypto's Issues

Support renaming directories

For this to work with tools like Spark, support is required for renaming directories. Currently, EncryptedFileSystem naively assumes the path being renamed is a regular file and fails when it cannot find key material for the directory. This is related to "Recursive delete support #93", in that there appears to be a general lack of consideration for recursive directory operations.

Seeking with "AES/CBC/PKCS5Padding" Cipher generates java.io.EOFException

I have had no luck with seeking on encrypted file streams when using the "AES/CBC/PKCS5Padding" cipher. On the other hand, the "AES/CTR/NoPadding" cipher works 100% of the time. My case is using StandaloneEncryptedFileSystem to write a Parquet file to an encrypted local filesystem and then reading the Parquet file back. One of the first things the Parquet reader does is read the footer (one of the last records in the file). This is where it always fails for me: seeking to the footer hits a premature EOF, typically 2 bytes short. I suspect this has something to do with the padding. FYI, I am using the latest IBM JDK (not Oracle or OpenJDK) ... not that it should make any difference. Any ideas? Should I create a simplified test case for you?
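
A self-contained reproduction along these lines (using the Seekable Crypto API shown earlier in this README rather than the full Parquet pipeline, and assuming an AesCbcCipher.ALGORITHM constant analogous to AesCtrCipher.ALGORITHM) might look like this sketch:

byte[] bytes = new byte[1 << 20]; // arbitrary contents

KeyMaterial keyMaterial = SeekableCipherFactory.generateKeyMaterial(AesCbcCipher.ALGORITHM);
ByteArrayOutputStream os = new ByteArrayOutputStream();
OutputStream encryptedStream = CryptoStreamFactory.encrypt(os, keyMaterial, AesCbcCipher.ALGORITHM);
encryptedStream.write(bytes);
encryptedStream.close();

SeekableInput input = new InMemorySeekableDataInput(os.toByteArray());
SeekableInput decrypted = CryptoStreamFactory.decrypt(input, keyMaterial, AesCbcCipher.ALGORITHM);

// Mimic a Parquet footer read: seek near the end of the plaintext and read the tail
decrypted.seek(bytes.length - 8);
byte[] tail = new byte[8];
decrypted.read(tail, 0, tail.length); // the premature EOF reportedly surfaces here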

OpenSSL encryption doesn't support OpenSSL 1.1

Trying to initialize the cipher results in

Exception in thread "main" java.io.IOException: java.security.GeneralSecurityException: CryptoCipher {org.apache.commons.crypto.cipher.OpenSslCipher} is not available or transformation AES/CTR/NoPadding is not supported.
	at org.apache.commons.crypto.utils.Utils.getCipherInstance(Utils.java:130)
	at ApacheCommonsCryptoLoad.main(ApacheCommonsCryptoLoad.java:10)
Caused by: java.security.GeneralSecurityException: CryptoCipher {org.apache.commons.crypto.cipher.OpenSslCipher} is not available or transformation AES/CTR/NoPadding is not supported.
	at org.apache.commons.crypto.cipher.CryptoCipherFactory.getCryptoCipher(CryptoCipherFactory.java:176)
	at org.apache.commons.crypto.utils.Utils.getCipherInstance(Utils.java:128)
	... 1 more
Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at org.apache.commons.crypto.utils.ReflectionUtils.newInstance(ReflectionUtils.java:90)
	at org.apache.commons.crypto.cipher.CryptoCipherFactory.getCryptoCipher(CryptoCipherFactory.java:160)
	... 2 more
Caused by: java.lang.reflect.InvocationTargetException
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
	at java.base/java.lang.reflect.Constructor.newInstance(Unknown Source)
	at org.apache.commons.crypto.utils.ReflectionUtils.newInstance(ReflectionUtils.java:88)
	... 3 more
Caused by: java.lang.RuntimeException: java.lang.UnsatisfiedLinkError: EVP_CIPHER_CTX_cleanup
	at org.apache.commons.crypto.cipher.OpenSslCipher.<init>(OpenSslCipher.java:59)
	... 8 more
Caused by: java.lang.UnsatisfiedLinkError: EVP_CIPHER_CTX_cleanup
	at org.apache.commons.crypto.cipher.OpenSslNative.initIDs(Native Method)
	at org.apache.commons.crypto.cipher.OpenSsl.<clinit>(OpenSsl.java:95)
	at org.apache.commons.crypto.cipher.OpenSslCipher.<init>(OpenSslCipher.java:57)
	... 8 more

Should have been fixed upstream in apache/commons-crypto@2875340

reading 512 bytes at a time

BLUF: Applications that use this library will attempt to read at most 512 bytes at a time, likely due to the behavior of CipherInputStream. I will stop neglecting #35, and try to get that over the line asap.

Earlier today I was investigating the impact of setting fs.s3a.experimental.input.fadvise equal to random for a Spark application that reads Parquet files from an S3 bucket. This is one of the cooler new features in Hadoop 2.8.0, and I was hoping to observe a noticeable improvement in read performance. Disappointingly and confusingly, reads actually became much slower unless I changed the value of fs.s3a.readahead.range to something much larger than the default, which is 64 KB.

DEBUG logging for org.apache.hadoop.s3a on the Spark executors indicated that the S3AInputStream for the object being read was being closed and reopened many times. This doesn't really make sense, since the number of bytes that s3a will request in this case is the maximum of the readahead value and the length that the application is requesting. No matter how ridiculously small the readahead, the stream should not be closed and reopened repeatedly unless the client is reading in small chunks.

A closer look at the DEBUG logging on S3AInputStream#reopen indicated that the value of the length variable was 512 for each read. That is, the client was reading 512 bytes, closing the stream, reopening it to read another 512 bytes, and so on. (EDIT: The client was requesting 512 bytes at a time, but s3a was reading 64 KB at a time, since max(512, 65536) = 65536.)

I haven't pulled out YourKit or the IntelliJ debugger, so I can't confirm for certain, but after reading some code I'm fairly certain that this mysterious 512 number comes from the ibuffer variable in CipherInputStream and the way it's used in CipherInputStream#read. From what I can tell, that class will read 512 bytes at a time, and it's not configurable.
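
If the JDK's javax.crypto.CipherInputStream is indeed the stream in play, the 512-byte behaviour is easy to demonstrate outside Hadoop. The sketch below (class and variable names are ours, purely illustrative) wraps CipherInputStream around a stream that records how many bytes each underlying read requests:

import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public final class CipherReadSizes {
    public static void main(String[] args) throws Exception {
        byte[] data = new byte[1 << 20]; // 1 MB of zeros is enough for this demo

        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(new byte[16], "AES"),
                new IvParameterSpec(new byte[16]));

        // Record the largest read requested from the underlying stream
        final int[] maxUnderlyingRead = {0};
        FilterInputStream counting = new FilterInputStream(new ByteArrayInputStream(data)) {
            @Override
            public int read(byte[] b, int off, int len) throws IOException {
                maxUnderlyingRead[0] = Math.max(maxUnderlyingRead[0], len);
                return super.read(b, off, len);
            }
        };

        // Ask for 64 KB at a time, the way DFSInputStream or S3AInputStream would be asked
        CipherInputStream cis = new CipherInputStream(counting, cipher);
        byte[] buffer = new byte[64 * 1024];
        while (cis.read(buffer) != -1) {
            // drain
        }

        // Typically prints 512: CipherInputStream never requests more from the
        // underlying stream regardless of the caller's buffer size.
        System.out.println("largest underlying read: " + maxUnderlyingRead[0]);
    }
}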

To confirm that something in this library -- likely but not certainly the use of CipherInputStream -- was the source of the 512 number, I wrote the same data into HDFS twice, once using an hdfs:// URL and a second time using an ehdfs:// URL. I then added some logging to DFSInputStream#read which prints the value of the len variable that it's given. For the hdfs:// URL, the value of len was consistently 65536 when I read the data, which is the value of io.file.buffer.size for the HDFS cluster in question. However, for the ehdfs:// URL, it was consistently 512.

People using this library with s3a in Hadoop 2.7.3 and earlier are unaffected by this issue, as s3a will ignore the length given to it by the client, and always request the value of getFileStatus().getLen(). However, users of this library on Hadoop 2.8.0+ will be unable to take advantage of the random reads feature without specifying a non-default readahead range, a reasonable value for which is hard to compute given this issue. More importantly, anyone who attempts to use this library for data stored in HDFS is likely going to have a bad time.

Switching to openssl, as in #35, is one solution, though there may be others as well.

cc some people: @schlosna @pwoody @robert3005 @ash211

Recursive delete support

Sometime between 2.0.0-rc7 and 2.3.2, recursive deletes started throwing an exception saying they are unsupported.

It would be nice if this were supported so that whole folders of encrypted data can be deleted easily. See internal backup solution PR 720.

use OpenSSL for writes

#65 uses OpenSSL for reads, but not for writes.

This would've been fixed by #35, but now that that's closed, there needs to be an alternate implementation.

Override all create and open methods in EncryptedFileSystem

Now that EncryptedFileSystem extends FilterFileSystem, many of the create and open methods will not get forwarded to the create and open methods currently implemented by FilterFileSystem (see #104). To fix this we either need to go back to implementing FileSystem and overriding all of the methods that are known to be broken (e.g. copyFromLocal), or continue to extend FilterFileSystem and override all of the create and open methods.

AesCbcCipher and AesCtrCipher to support key sizes smaller than 256 bits.

Hi guys,

Are you planning on making the KEY_SIZE static field modifiable so that this library works when running on JREs/JVMs that do not ship with the enhanced security files? If not, do you feel it would be worth contributing back to the project if I coded it myself?

Thanks!

cannot distcp into ehdfs

Hadoop's distcp utility first writes output to a temporary directory in the target filesystem: https://github.com/apache/hadoop/blob/branch-2.8.0/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java#L174. This intermediate data is later copied to the destination Path.

Currently, EncryptedFileSystem does not implement the version of create() that is called in the linked code. The attempt to write intermediate data to ehdfs therefore fails, and the mapper falls back to writing to hdfs instead, so the data is not encrypted and no symmetric key is produced.

The lack of a symmetric key (correctly) causes EncryptedFileSystem#rename to fail when the intermediate data is promoted to the target path: java.lang.RuntimeException: java.io.FileNotFoundException: File does not exist: /.distcp.tmp.attempt_local913233795_0001_m_000000_0.keymaterial.

confirm perf improvements of reads after switch to OpenSSL

There should be substantive perf improvements on reads after #65, but we need to confirm this through testing. We should also make sure that perf is at least as good as it was on the branch for #35.

More generally, it would be amazing if there were a way to test the perf impact of changes to this library to catch regressions. Something that compares the perf of hdfs vs ehdfs or s3a vs es3a would be great, though I have no idea how one sets up such a thing in the absence of an actual HDFS cluster or S3 bucket (I doubt that fake ones running inside docker containers are particularly useful for testing perf).
