
rocksdict's Introduction

RocksDict / SpeeDict

Key-value storage for Python, wrapping RocksDB and Speedb.


Installation

Wheels are available for macOS (amd64/arm64), Linux (amd64/arm64), and Windows (amd64).

  • pip install rocksdict for the RocksDB backend, then from rocksdict import Rdict
  • pip install speedict for the Speedb backend, then from speedict import Rdict

Introduction

This library has two purposes.

  1. As an on-disk key-value storage solution for Python.
  2. As a RocksDB / Speedb interface.

These two purposes operate in different modes:

  • Default mode, which allows storing int, float, bool, str, bytes, and other Python objects (via pickle).

  • Raw mode (options=Options(raw_mode=True)), which allows storing only bytes.
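For instance, opening a database in each mode (the paths here are placeholders):

from rocksdict import Rdict, Options

# default mode: keys and values can be int, float, bool, str, bytes,
# or any picklable Python object
db = Rdict("./default_db")

# raw mode: keys and values must be bytes
raw_db = Rdict("./raw_db", options=Options(raw_mode=True))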

Easily Inspect RocksDB Databases Created by C++, Java, or Other Languages

Since v0.3.24b2.

from rocksdict import Rdict

# This will automatically load the latest options and column families.
# Note also that this automatically uses RAW MODE,
# as it detects that the db was not created by RocksDict
# (since v0.3.24b2).
db = Rdict("db_path")

# list column families
cfs = Rdict.list_cf("db_path")
print(cfs)

# use one of the column families
cf1 = db.get_column_family(cfs[1])

# iterate through all key-value pairs in cf1
for k, v in cf1.items():
    print(f"{k} -> {v}")

# iterate through all wide columns in cf1
for k, v in cf1.entities():
    print(f"{k} -> {v}")

Examples

A minimal example

from rocksdict import Rdict
import numpy as np
import pandas as pd

path = "./test_dict"

# create a Rdict with default options at `path`
db = Rdict(path)
db[1.0] = 1
db["huge integer"] = 2343546543243564534233536434567543
db["good"] = True
db["bytes"] = b"bytes"
db["this is a list"] = [1, 2, 3]
db["store a dict"] = {0: 1}
db[b"numpy"] = np.array([1, 2, 3])
db["a table"] = pd.DataFrame({"a": [1, 2], "b": [2, 1]})

# reopen Rdict from disk
db.close()
db = Rdict(path)
assert db[1.0] == 1
assert db["huge integer"] == 2343546543243564534233536434567543
assert db["good"] == True
assert db["bytes"] == b"bytes"
assert db["this is a list"] == [1, 2, 3]
assert db["store a dict"] == {0: 1}
assert np.all(db[b"numpy"] == np.array([1, 2, 3]))
assert np.all(db["a table"] == pd.DataFrame({"a": [1, 2], "b": [2, 1]}))

# iterate through all elements
for k, v in db.items():
    print(f"{k} -> {v}")

# batch get:
print(db[["good", "bad", 1.0]])
# [True, None, 1] -- the missing key "bad" comes back as None

# close the database and delete it from disk
db.close()
Rdict.destroy(path)

An Example of Raw Mode

This mode allows only bytes as keys and values.

from rocksdict import Rdict, Options

PATH_TO_ROCKSDB = "path"

# open in raw mode, which allows only bytes
db = Rdict(path=PATH_TO_ROCKSDB, options=Options(raw_mode=True))

db[b'a'] = b'a'
db[b'b'] = b'b'
db[b'c'] = b'c'
db[b'd'] = b'd'

for k, v in db.items():
    print(f"{k} -> {v}")

# close and delete
db.close()
Rdict.destroy(PATH_TO_ROCKSDB)

New Feature Since v0.3.3

Loading Options from a RocksDict Path.

Load Options and Add a New Column Family

from rocksdict import Options, Rdict
path = "./rocksdict_path"

opts, cols = Options.load_latest(path)
opts.create_missing_column_families(True)
cols["bytes"] = Options()
test_dict = Rdict(path, options=opts, column_families=cols)

Reopening RocksDB Reads DB Options Automatically

from rocksdict import Rdict, Options, SliceTransform, PlainTableFactoryOptions
import os

def db_options():
    opt = Options()
    # create the database if it is missing
    opt.create_if_missing(True)
    # configure more background jobs
    opt.set_max_background_jobs(os.cpu_count())
    # configure mem-table to a large value (256 MB)
    opt.set_write_buffer_size(0x10000000)
    opt.set_level_zero_file_num_compaction_trigger(4)
    # configure L0 and L1 to have the same size (1 GB)
    opt.set_max_bytes_for_level_base(0x40000000)
    # 256 MB file size
    opt.set_target_file_size_base(0x10000000)
    # use a smaller compaction multiplier
    opt.set_max_bytes_for_level_multiplier(4.0)
    # use 8-byte prefix (2 ^ 64 is far enough for transaction counts)
    opt.set_prefix_extractor(SliceTransform.create_max_len_prefix(8))
    # set to plain-table
    opt.set_plain_table_factory(PlainTableFactoryOptions())
    return opt


# create DB
db = Rdict("./some_path", db_options())
db[0] = 1
db.close()

# all options are automatically reloaded on reopening
db = Rdict("./some_path")
assert db[0] == 1

# destroy
db.close()
Rdict.destroy("./some_path")

More Examples on WriteBatch, SstFileWriter, Snapshot, RocksDB Options, etc.

Go to the examples folder.

Limitations

Merge operations and custom comparators are currently not supported.
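Where a merge operator would otherwise be used, a read-modify-write can stand in. A minimal sketch assuming a single writer (unlike a real merge operator, this is not atomic across processes):

from rocksdict import Rdict

db = Rdict("./counter_db")

def add_to_counter(key, delta):
    # emulate a merge by hand: read the current value, modify it, write it back
    current = db.get(key) or 0
    db[key] = current + delta

add_to_counter("visits", 1)
db.close()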

Full Documentation

See the rocksdict documentation.

rocksdict's People

Contributors

congyuwang, dependabot[bot], godtamit, jiangdonglai98, scientificworld, wbarnha


rocksdict's Issues

Scalability/Performance

Not really an issue per se, but I wasn't sure where else to ask. Perhaps these could be part of a FAQ.

  1. Key limit. What is the scalability in terms of the number of keys? E.g., if I add 100M keys to a rocksdict, will it fall over?
  2. Network shareable. If the dict folder is stored on a network share, can it be safely read by multiple concurrent users? What about writes?
  3. Cold boot. Is there a significant performance penalty with opening the database or fetching a key for the first time as it grows in size? I see this with semidbm even on a moderate size db of ~300MB and 5M keys but so far so good with rocksdict.
  4. Duplicate handling. For example, if I am storing long pathnames in my values and there are many duplicates, do I need to manually intern those strings as (unique_id, path) pairs in the DB (and look them up to retrieve them) to avoid propagating the duplicates, or is this taken care of in the lib (or by Python itself)? See the sketch after this list.
  5. Raw mode. I would imagine there might be some storage efficiency gained by using raw mode over pickle. True?
  6. Writes. I could be wrong, but write speed does seem slightly slower than semidbm on a moderately large db. Even if true, it's probably worth the tradeoff in terms of read speed for most applications. What should I expect here, and does it get worse with size?
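On point 4: each value is pickled independently, so duplicate strings inside values are not deduplicated automatically (block compression aside). A hypothetical sketch of the manual interning the question describes (the "path:{id}" key scheme and the intern_path helper are invented for illustration):

from rocksdict import Rdict

db = Rdict("./interned_db")
path_ids = {}  # in-memory path -> id map for this write session

def intern_path(path):
    # store each unique path once, under an integer id
    if path not in path_ids:
        pid = len(path_ids)
        path_ids[path] = pid
        db[f"path:{pid}"] = path  # persist id -> path for later lookup
    return path_ids[path]

db["record-1"] = {"size": 123, "path_id": intern_path("/very/long/shared/path")}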

Many thanks for making the lib!

BUG: Exception: utf-8 decoding error (using .with_ttl to write, and any other mode to read)


Describe the bug
When a value is written to rocksdb using the with_ttl mode and then read using any other mode, a utf-8 decoding error occurs.

To Reproduce
Create the db using with_ttl, write, and close it. Then open the db using read_only and read; an error occurs:

from rocksdict import AccessType, Rdict

db = Rdict(
    './test_dict',
    access_type=AccessType.with_ttl(24 * 3600)
)
db[0] = 'test'
db.close()

db = Rdict(
    './test_dict',
    access_type=AccessType.with_ttl(24 * 3600)
)
print(db[0])

db = Rdict(
    './test_dict',
    access_type=AccessType.read_only()
)
print(db[0])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
Exception: utf-8 decoding error

Expected behavior
Expected the value to be read, just as it is when using with_ttl mode.
Additional info:

➜ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.2 LTS
Release:        22.04
Codename:       jammy
➜ python --version
Python 3.10.6

Corrupt or unsupported format_version

Hi there,

I keep receiving this error while trying to create or access any new or existing Rdict. Could you please suggest how to solve it?

Traceback (most recent call last):
  File "/home/main.py", line 53, in <module>
    db.ingest_external_file(["test.db/temp1.sst"])
Exception: Corruption: Corrupt or unsupported format_version: 33554440

This is the result of db.repair(path)

Exception: IO error: lock hold by current process, acquire time 1669990343 acquiring thread 139711774802560: tempDict1/LOCK: No locks available

I was running tests with approx. 100M records and some processes crashed. Could that have caused the lock and the corruption?
Thanks for the help.

Cannot install lib

Hi,

I tried to install your lib using pip but neither python 3.6 nor python 3.7 worked.

Each time I try the following: pip3 install rocksdict

I get

Collecting rocksdict
  Could not find a version that satisfies the requirement rocksdict (from versions: )
No matching distribution found for rocksdict

Any idea? Thanks a lot.

Exception: Result incomplete: Write stall

Hello,

db.write with a WriteBatch object is throwing "Exception: Result incomplete: Write stall". I'm having a hard time debugging the issue.

My situation involves a process (gatherer) that generates key-value pairs and puts them in a Queue.
Another process (writer) gets a batch of key-value pairs from the Queue, checks whether the keys are already in the database, and if they are not, puts them into a WriteBatch object. Once the entire batch has been processed, I call the db.write method.

Do you have any suggestions for how I can investigate this issue?

Thanks,
Michelangelo

(Insert) order guarantee

Hi,

starting from Python 3.7, dict() objects guarantee insertion order, much like collections.OrderedDict. Is there any order guarantee, or the possibility of one, for this library? I didn't find any statement in the docs.

Best
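Not an authoritative answer, but a quick sketch to observe the behaviour: RocksDB iterates in key order under its comparator (bytewise by default), not in insertion order:

from rocksdict import Rdict, Options

db = Rdict("./order_test", options=Options(raw_mode=True))
db[b"b"] = b"2"
db[b"a"] = b"1"  # inserted second, iterated first

print(list(db.keys()))  # [b'a', b'b'] -- sorted by key, not by insertion

db.close()
Rdict.destroy("./order_test")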

How to get WideColumns data in "Raw Mode" from rocksdb

The db was created in a C++ app and contains a lot of WideColumns data. Is it possible to access these data using RocksDict?
In the code example below, the value size (len(v)) is always 0, but the keys (k) are shown as expected.

cf_lst = Rdict.list_cf(GLB_PATH)
opts = Options(raw_mode=True)
db = Rdict(path=GLB_PATH, options=opts, column_families={cf_lst[1]: opts})
db_cf1 = db.get_column_family(cf_lst[1])
#
for k, v in db_cf1.items():
    print(f'k={k}, v size={len(v)}')

db.close()

OS: Windows 2019 Server
compiler: msvc 19.39.33523
rocksdb: v9.0
RocksDict: v0.3.23
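Note: the README section above shows that wide-column reads were added via entities() in v0.3.24; this issue reports v0.3.23, which predates it. A sketch assuming v0.3.24 or later, reusing db_cf1 from the snippet above:

# entities() yields each key together with its wide columns,
# instead of the single value that items() yields
for k, columns in db_cf1.entities():
    print(f"k={k}, columns={columns}")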

DB size reduces and keys get deleted after re-opening the db

I used WriteBatch to insert ~3000 million keys, all data got inserted successfully and the total db size was ~208 GB at the end of the write operation. I closed the db and then re-opened it. After re-opening, the size of the db reduced to ~35 GB and most of the keys got deleted. Only ~4 million keys were present. I also tried inserting data into 27 column families. This time again the db size was reduced, but the total keys were 0 in each column family. I tried distributing the data to 27 different dbs, but again the same thing happened.

I used this example as a reference: https://github.com/Congyuwang/RocksDict/blob/main/examples/batch_write.py

Is this some kind of limitation, or a bug?

Does it support async?

RocksDB allows for an async db.get. Can it be used with rocksdict, or should this be a feature request?

Addition of key_may_exist() method

Thanks for the help with our migration! Because of your changes, I was able to get most of the tests in faust-streaming/faust#478 working smoothly. We almost have all of rocksdict ready to replace python-rocksdb! There is one method remaining to mirror from https://faust-streaming.github.io/python-rocksdb/api/database.html that I particularly need:

key_may_exist(key, fetch=False, verify_checksums=False, fill_cache=True, snapshot=None, read_tier='all')

If the key definitely does not exist in the database, then this method returns False, else True. If the caller wants to obtain value when the key is found in memory, fetch should be set to True. This check is potentially lighter-weight than invoking DB::get(). One way to make this lighter weight is to avoid doing any IOs.

Parameters:

        key (bytes) – Key to check

        fetch (bool) – Obtain also the value if found

For the other params see [rocksdb.DB.get()](https://faust-streaming.github.io/python-rocksdb/api/database.html#rocksdb.DB.get)

Returns:

        (True, None) if key is found but value not in memory

        (True, None) if key is found and fetch=False

        (True, <data>) if key is found and value in memory and fetch=True

        (False, None) if key is not found

I'll try to take a stab at adding it myself, but I'm currently unsure where to begin.
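Until a native method lands, a rough emulation over get() is possible. This is only a sketch: it always pays the full read cost and cannot distinguish "found but value not in memory" (the key_may_exist name below mirrors the requested API, not an existing rocksdict method):

def key_may_exist(db, key, fetch=False):
    # heavier than a native key_may_exist: this always performs a full get
    value = db.get(key)  # rocksdict's get returns None when the key is absent
    if value is None:
        return (False, None)
    return (True, value if fetch else None)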

TypeError: cannot pickle 'generator' object

code:

from rocksdict import Rdict

path = str("./test_dict")
db = Rdict(path)
def f(a, b):
    for n in range(a):
        yield b + b[-1] * n


gen = f(4, 'ay')
x = next(gen)
db['gx'] = gen

error:

TypeError: cannot pickle 'generator' object

thinking:

This is presumably caused by pickling, but it doesn't fit the README's description of being able to store anything.
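Generators carry live execution state, so pickle rejects them by design; "other python objects" in the README means picklable objects. Two workaround sketches, reusing f and db from the snippet above: store the produced values, or store the recipe to rebuild the generator:

db['gx_values'] = list(f(4, 'ay'))  # materialize the values instead

db['gx_args'] = (4, 'ay')           # or store the arguments ...
a, b = db['gx_args']
gen = f(a, b)                       # ... and recreate the generator on load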

Support for entity (wide columns) batch put/get

Hi, thank you for this project! I have found it super helpful and really easy to follow the documentation. ⭐⭐⭐⭐⭐ rating.

I'm currently using WriteBatch to write many keys at once, which is necessary for having good performance on my bulk loads. I'm also using the multi-get (get with a list of keys). I now want to try using entities (aka "wide columns") but I don't see the equivalent batch write and multi-get operations. They exist in RocksDB, so any chance these could be exposed in RocksDict soon?

Again, thank you for this project!

Could not find python3.12 install candidate for speedict

I have a Python 3.12 env, and when I run pip install speedict I get:
ERROR: Could not find a version that satisfies the requirement speedict (from versions: none)
ERROR: No matching distribution found for speedict

I am able to install rocksdict, though.

you can have just one open session

My problem is that when the database is opened by one session, it is locked for others.

I want to give others the option of concurrent use.

Multi thread/process

I have a process writing entries to the dict and multiple threads attempting to read values from it. However, if the client threads open the dict before the entries are created, we get a key-not-found error. How do I inform consumers that it's safe to read from disk?

Exception: Corruption: VersionBuilder: Cannot delete table file #75038 from level 1 since it is not in the LSM tree

This exception was raised, and I found this related RocksDB test:

TEST_F(VersionBuilderTest, ApplyFileDeletionNotInLSMTree) {
  UpdateVersionStorageInfo();

  EnvOptions env_options;
  constexpr TableCache* table_cache = nullptr;
  constexpr VersionSet* version_set = nullptr;

  VersionBuilder builder(env_options, &ioptions_, table_cache, &vstorage_,
                         version_set);

  VersionEdit edit;

  constexpr int level = 3;
  constexpr uint64_t file_number = 1234;

  edit.DeleteFile(level, file_number);

  const Status s = builder.Apply(&edit);
  ASSERT_TRUE(s.IsCorruption());
  ASSERT_TRUE(std::strstr(s.getState(),
                          "Cannot delete table file #1234 from level 3 since "
                          "it is not in the LSM tree"));
}

Where should I look to find the root of the issue? Is it caused by my code, by RocksDict, or simply by rocksdb?
It happened just once, across all the services that have been running for a long time.

Any help would be appreciated.

One writer, many readers

Hello! I'm trying to use this library as a KV store in a FastAPI app.
Workers from gunicorn connect with AccessType.read_only(False) and the primary connects with AccessType.read_write().
Writer:

opt = Options(raw_mode=True)
opt.set_max_background_jobs(4)
opt.set_write_buffer_size(1024 * 1024 * 256)
opt.create_if_missing(True)
opt.set_keep_log_file_num(1)
opt.set_db_log_dir(logs_path)
opt.set_optimize_filters_for_hits(True)
opt.optimize_for_point_lookup(1024)
opt.set_max_open_files(1000)
opt.set_wal_dir(wal_path)
opt.set_wal_size_limit_mb(100)
opt.set_wal_ttl_seconds(180)
opt.set_max_total_wal_size(67108864)
opt.set_wal_recovery_mode(DBRecoveryMode.absolute_consistency())
db = Rdict(data_path, options=opt, access_type=AccessType.read_write())

Reader:

opt = Options(raw_mode=True)
opt.set_max_background_jobs(4)
opt.set_write_buffer_size(1024 * 1024 * 256)
opt.create_if_missing(True)
opt.set_keep_log_file_num(1)
opt.set_db_log_dir(logs_path)
opt.set_optimize_filters_for_hits(True)
opt.optimize_for_point_lookup(1024)
opt.set_max_open_files(1000)
opt.set_wal_dir(wal_path)
opt.set_wal_size_limit_mb(100)
opt.set_wal_ttl_seconds(180)
opt.set_max_total_wal_size(67108864)
opt.set_wal_recovery_mode(DBRecoveryMode.absolute_consistency())
db = Rdict(data_path, options=opt, access_type=AccessType.read_only(False))

So, when I change the values, readers don't see the changes without re-opening the database.
For example:
Write process:

db[bytes.fromhex("abcd12")] = (1).to_bytes(1, "little")
db.flush()
db.flush_wal()

Read process:

int.from_bytes(db[bytes.fromhex("abcd12")], "little") # not found
db = Rdict(data_path, options=opt, access_type=AccessType.read_only(False))
int.from_bytes(db[bytes.fromhex("abcd12")], "little") # found: 1

What am I doing wrong?
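One pattern that may avoid reopening, assuming a rocksdict version that exposes AccessType.secondary and try_catch_up_with_primary (a sketch, not a verified fix for this setup; secondary_path is a placeholder scratch directory for this reader):

from rocksdict import Rdict, Options, AccessType

opt = Options(raw_mode=True)
db = Rdict(
    data_path,
    options=opt,
    access_type=AccessType.secondary(secondary_path),  # per-reader scratch dir
)

db.try_catch_up_with_primary()  # pull in the primary's latest writes on demand
print(int.from_bytes(db[bytes.fromhex("abcd12")], "little"))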

I get a "Too many open files" error

Hi,
I get this error when trying to write to a file. I actually have many DB files open, one for each date, and I get:
OSError: Too many open files (os error 24)
I use WriteBatch and write it every 25k samples to accelerate the write time.

set_compression_options documentation improvement

Hello,

I'm trying to increase the amount of compression by changing the compression algorithm to zstd. However, it's not clear how to use the set_compression_options function correctly.

The input parameters w_bits, level, strategy, max_dict_bytes are not explained in the documentation so I'm struggling to understand their meaning.

By just setting the compression type to zstd,

    opt.set_compression_type(DBCompressionType.zstd())
    opt.set_zstd_max_train_bytes(256 * 1024 * 1024)

throws an error,

Exception: Invalid argument: The dictionary size limit (`CompressionOptions::max_dict_bytes`) should be nonzero if we're using zstd's dictionary generator.

that seems to suggest that I should select a value for max_dict_bytes.
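For what it's worth, a hedged sketch of how the four positional arguments appear to map onto RocksDB's CompressionOptions, i.e. (window_bits, level, strategy, max_dict_bytes); the values below are illustrative, not tuned recommendations:

opt.set_compression_type(DBCompressionType.zstd())
# a nonzero max_dict_bytes (last argument) enables the dictionary generator,
# which is what the error message asks for
opt.set_compression_options(-14, 3, 0, 16 * 1024)
opt.set_zstd_max_train_bytes(100 * 16 * 1024)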

AARCH64 whl

I would like to use RocksDict in a project that must support aarch64 on Linux.

Support for Linux/aarch64

Is it possible to add support for Linux/aarch64?

#0 134.5   • Installing rocksdict (0.3.15)
#0 134.6 
#0 134.6   RuntimeError
#0 134.6 
#0 134.6   Unable to find installation candidates for rocksdict (0.3.15)

Happy to work on a PR, if you point me in the right direction.

Unable to get 'key' from RocksDict

I am ingesting key-value pairs where the key is an int and the value is a float. After ingestion is done, I also call db.write() and then db.close(). After that, I zip the database folder, move it to a server, and unzip it; when I use the DB there, it shows the keys as not present, but on my local system the keys are present for the same DB. The ingestion is around ~80 million records. Also, even if I close the DB in one IDE and use it in a Jupyter notebook, the behaviour is inconsistent: the notebook returns values for the keys while the IDE doesn't.
@Congyuwang

Is RocksDict compatible with RocksDB-Python?

Here is my code; I am trying to save some SST files and read them with rocksdb in Python:

from rocksdict import Rdict, Options, SstFileWriter
import random
import rocksdb

# generate some rand bytes
rand_bytes1 = [random.randbytes(200) for _ in range(100000)]
rand_bytes1.sort()

# write to file1.sst
writer = SstFileWriter(options=Options(raw_mode=True))
writer.open("file1.sst")
for k, v in zip(rand_bytes1, rand_bytes1):
    writer[k] = v

writer.finish()

# build Rdict
d = Rdict("tmp", options=Options(raw_mode=True))
d.ingest_external_file(["file1.sst"])
d.close()

# rocksdb read
db = rocksdb.DB("tmp", read_only=True, opts=rocksdb.Options())

And I am getting the error below:

---------------------------------------------------------------------------
Corruption                                Traceback (most recent call last)
----> db = rocksdb.DB("tmp", read_only=True, opts=rocksdb.Options())

File /opt/huawei/wisebrain/anaconda3/lib/python3.9/site-packages/rocksdb/_rocksdb.pyx:1636, in rocksdb._rocksdb.DB.__cinit__()

File /opt/huawei/wisebrain/anaconda3/lib/python3.9/site-packages/rocksdb/_rocksdb.pyx:76, in rocksdb._rocksdb.check_status()

Corruption: b'Corruption: unknown checksum type 4 from footer of tmp/000019.sst, while checking block at offset 22672231 size 35'

Does this mean that rocksdict is not compatible with rocksdb in python? Looking forward to anyone's help!

how can I delete the WAL files of the db

Hi Congyu,
My db saved WAL files because I stopped the script manually while writing data. So I reopened the db and called flush() and close() to try to delete them.
But when I open the db in read-only mode, it still reports that WAL files exist unless error_if_log_file_exist is set to False, and there are a large number of xxx.log, LOG, and LOG.old files in the folder. How can I delete the WAL files and clean up the logs?
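A sketch of the usual cleanup route, assuming that flushing the memtables makes the WAL obsolete (so RocksDB can purge it) and that keep_log_file_num caps the LOG/LOG.old info logs; db_path is a placeholder:

from rocksdict import Rdict, Options

opt = Options()
opt.set_keep_log_file_num(1)  # cap the LOG / LOG.old info-log files

db = Rdict(db_path, options=opt)
db.flush()      # persist memtables so the WAL contents become obsolete
db.flush_wal()  # make sure the WAL itself is written out
db.close()      # obsolete WAL files can then be purged on the next open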

Support for Alpine Linux

FROM python:3.12.4-alpine3.20
RUN python3.12 -m pip install --upgrade pip \
 && python3.12 -m pip install rocksdict

Successfully installed pip-24.1.2

Same error here - ERROR: No matching distribution found for rocksdict
or - Could not find a version that satisfies the requirement rocksdict (from versions: none)

Connected issue: #86

compiled on x86

How can I solve it?

It works perfectly only on a Debian image.

Support Speedb

Speedb is a drop-in replacement for RocksDB. That is, it supports the exact same interface. Unfortunately, it doesn't have a Python wrapper. Could we optionally use RocksDict with Speedb instead of RocksDB?

latest wheel does not work on Apple silicon

This is the error log I got:

 from rocksdict import Options, Rdict, ReadOptions, WriteBatch, WriteOptions
../../../.pyenv/versions/3.8.6/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/rocksdict/__init__.py:1: in <module>
    from .rocksdict import *
E   ImportError: dlopen(/Users/fengwang/.pyenv/versions/3.8.6/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/rocksdict/rocksdict.cpython-38-darwin.so, 2): Symbol not found: __ZNKSt3__115basic_stringbufIcNS_11char_traitsIcEENS_9allocatorIcEEE3strEv
E     Referenced from: /Users/fengwang/.pyenv/versions/3.8.6/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/rocksdict/rocksdict.cpython-38-darwin.so (which was built for Mac OS X 12.0)
E     Expected in: /usr/lib/libc++.1.dylib

Running uname -a yields:

 uname -a
Darwin Fengs-MacBook-Pro.local 20.3.0 Darwin Kernel Version 20.3.0: Thu Jan 21 00:06:51 PST 2021; root:xnu-7195.81.3~1/RELEASE_ARM64_T8101 x86_64

Running pip install -U rocksdict yields:

Collecting rocksdict
  Downloading rocksdict-0.3.0-cp38-cp38-macosx_10_9_x86_64.macosx_11_0_arm64.macosx_10_9_universal2.whl (6.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.4/6.4 MB 16.2 MB/s eta 0:00:00
Installing collected packages: rocksdict
  Attempting uninstall: rocksdict
    Found existing installation: rocksdict 0.2.16
    Uninstalling rocksdict-0.2.16:
      Successfully uninstalled rocksdict-0.2.16
Successfully installed rocksdict-0.3.0
