
shred's Introduction

shred - Shared resource dispatcher

Build Status Crates.io MIT/Apache Docs.rs LoC

This library allows dispatching systems, which can have interdependencies and shared or exclusive resource access, in parallel.

Usage

extern crate shred;

use shred::{DispatcherBuilder, Read, Resource, ResourceId, System, SystemData, World, Write};

#[derive(Debug, Default)]
struct ResA;

#[derive(Debug, Default)]
struct ResB;

#[derive(SystemData)] // Provided with `shred-derive` feature
struct Data<'a> {
    a: Read<'a, ResA>,
    b: Write<'a, ResB>,
}

struct EmptySystem;

impl<'a> System<'a> for EmptySystem {
    type SystemData = Data<'a>;

    fn run(&mut self, bundle: Data<'a>) {
        println!("{:?}", &*bundle.a);
        println!("{:?}", &*bundle.b);
    }
}

fn main() {
    let mut world = World::empty();
    let mut dispatcher = DispatcherBuilder::new()
        .with(EmptySystem, "empty", &[])
        .build();
    world.insert(ResA);
    world.insert(ResB);

    dispatcher.dispatch(&mut world);
}

Please see the benchmark for a bigger (and useful) example.

Required Rust version

1.56.1 stable

Features

  • lock-free
  • no channels or similar synchronization primitives used (less overhead)
  • allows both automated parallelization and fine-grained control

Contribution

Contribution is highly welcome! If you'd like another feature, just create an issue. You can also help out if you want to; just pick a "help wanted" issue. If you need any help, feel free to ask!

All contributions are assumed to be dual-licensed under MIT/Apache-2.

License

shred is distributed under the terms of both the MIT license and the Apache License (Version 2.0).

See LICENSE-APACHE and LICENSE-MIT.

shred's People

Contributors

aceeri, andreacatania, azriel91, barskern, binero, bors[bot], crabm4n, dependabot-preview[bot], dependabot-support, imberflur, issew, joshlf, kvark, marwes, object905, pengowen123, rhuagh, schell, sebastiengllmt, shimaowo, thinkofname, timonpost, torkleyy, tversteeg, veetaha, vorner, wadelma, xaeroxe, xmac94x, zesterer


shred's Issues

Ongoing code maintenance

  • Update dependencies for both shred, shred-derive.
  • Cargo lints (cargo fix)
  • Clippy lints (manual).

Metatables can trigger segfaults in safe code

I was reading through this code out of interest, and I saw that the metatable can be used to trigger undefined behavior without using any unsafe code. Basically, if an implementation of CastFrom borrows out a reference to sub-data within itself, instead of to its entire self, it will incorrectly cast raw pointers.

This program triggers a segfault by dereferencing null:

extern crate shred;

use shred::{CastFrom, MetaTable};

pub trait PointsToU64 {
    fn get_u64(&self) -> u64;
}

impl PointsToU64 for Box<u64> {
    fn get_u64(&self) -> u64 {
        *(&**self)
    }
}

struct MultipleData {
    _number: u64,
    pointer: Box<u64>,
}

impl CastFrom<MultipleData> for PointsToU64 {
    fn cast(t: &MultipleData) -> &Self {
        &t.pointer
    }

    fn cast_mut(t: &mut MultipleData) -> &mut Self {
        &mut t.pointer
    }
}

fn main() {
    let mut table: MetaTable<PointsToU64> = MetaTable::new();
    let md = MultipleData {
        _number: 0x0, // this will be casted to a pointer, then dereferenced
        pointer: Box::new(42),
    };
    table.register(&md);
    if let Some(t) = table.get(&md) {
        println!("{}", t.get_u64());
    }
}

I'm not sure if you're aware of this, but it seems noteworthy to me. One solution I could see would be to make CastFrom an unsafe trait and create a macro for correctly implementing it on a type. Another approach that might work would be to use the Unsize<T> API, but I'm not sure how stable that is. A third approach would be to modify the metatable to actually invoke the implementation of CastFrom, which seems like the least disruptive solution, but it would add a slight runtime cost.
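The pointer-identity assumption can be illustrated without shred at all. This std-only toy (all type names invented for the sketch) shows that a trait-object reference to a sub-field carries the address of the field, not of the whole struct, which is exactly what a metatable-style raw-pointer cast relies on:

```rust
// Toy illustration (std only; names are made up for this sketch) of the
// pointer-identity assumption: metatable-style code assumes the data pointer
// of the returned `&dyn Trait` equals the address of the whole value.
// A `cast` that borrows a sub-field violates that.
trait PointsToU64 {
    fn get_u64(&self) -> u64;
}

struct Inner(u64);

impl PointsToU64 for Inner {
    fn get_u64(&self) -> u64 {
        self.0
    }
}

#[repr(C)] // fix the layout so the field offset is deterministic
struct MultipleData {
    _number: u64,
    inner: Inner,
}

// Offset between the whole struct and the trait object's data pointer.
fn field_offset(md: &MultipleData) -> usize {
    let whole = md as *const MultipleData as usize;
    let part = &md.inner as &dyn PointsToU64 as *const dyn PointsToU64 as *const () as usize;
    part - whole
}

fn main() {
    let md = MultipleData { _number: 0, inner: Inner(42) };
    // The data pointer points at the field, 8 bytes past the struct start,
    // so casting it back to `*const MultipleData` reads the wrong memory.
    println!("offset = {}", field_offset(&md));
}
```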

Resources renaming

These are some follow-ups for #109:

  • use a deprecated Resources alias for easier migration
  • adjust README to use World
  • proof read the docs
  • rename parameters / variable names from resources / res to world

Release 0.10.1

Tracking issue for 0.10.1 release. (previous release)

Steps:

  1. Update Cargo.toml version to match the intended version.
  2. Update CHANGELOG.md with today's date.
  3. PR (always good to have CI check).
  4. git checkout master && git pull
  5. cargo publish
  6. git tag $version && git push origin $version

Add `System::dispose`

Requested by @Frizi.

System::dispose would be called by Dispatcher::dispose and can be used to clean up externally allocated resources.

Unresolved

  • Signature: fn dispose(&self, world: &World) / fn dispose(&self, world: &mut World) / fn dispose(&mut self, world: &World) / fn dispose(&mut self, world: &mut World)
  • Is it mandatory to call it? How to make sure it gets called?

Build with parallel feature disabled is broken in 0.9.3

error[E0433]: failed to resolve: could not find `rayon` in `{{root}}`
 --> /home/ralith/.cargo/registry/src/github.com-1ecc6299db9ec823/shred-0.9.3/src/dispatch/dispatcher.rs:7:56
  |
7 | pub type ThreadPoolWrapper = Option<::std::sync::Arc<::rayon::ThreadPool>>;
  |                                                        ^^^^^ could not find `rayon` in `{{root}}`

error[E0609]: no field `thread_pool` on type `dispatch::builder::DispatcherBuilder<'a, 'b>`
   --> /home/ralith/.cargo/registry/src/github.com-1ecc6299db9ec823/shred-0.9.3/src/dispatch/builder.rs:238:28
    |
238 |         dispatcher_builder.thread_pool = self.thread_pool.clone();
    |                            ^^^^^^^^^^^ help: a field with a similar name exists: `thread_local`

error[E0609]: no field `thread_pool` on type `&mut dispatch::builder::DispatcherBuilder<'a, 'b>`
   --> /home/ralith/.cargo/registry/src/github.com-1ecc6299db9ec823/shred-0.9.3/src/dispatch/builder.rs:238:47
    |
238 |         dispatcher_builder.thread_pool = self.thread_pool.clone();
    |                                               ^^^^^^^^^^^

Building without default features should be tested in CI.

Ability to suspend Systems

From what I've seen, there doesn't seem to be the ability to suspend Systems? Basically, after exploring through amethyst and specs, I ended up concluding that shred would presumably be where this functionality would go. Feel free to direct me in the right place if that's not the case.

Right now, I know that amethyst supports a Pausable wrapper which simply checks a condition every time the system is run; this is okay, but it would be nice to be able to suspend a system (making it no longer dispatched) and be able to later signal it to be added back to the queue. Presumably, this would help make it easier to interface with Generators and the like later on.

Alternatively, you could argue that the best case for this is to have multiple Dispatchers and use the right one depending on the state of the system. This might be better for cases where the set of currently running systems is swapped, e.g. a pause menu in a game, rather than the generic case which might be better for I/O. If this is the route that seems best, then I can move this issue to specs and discuss that there.

Anyway, I didn't see a discussion on this, so, I figured I'd start one.

Check if the `Resources` are empty

If a resource couldn't be fetched and Resources is empty, it might be because somebody accidentally created a new instance by specifying it as SystemData. This should be pointed out in the panic.

Why won't Write<R> initialize R if it's missing?

I'm kind of missing the point of Write. It requires a Default instance to initialize the resource, but still panics if used without initializing it manually. So what is the difference from WriteExpect?
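For context: `Write<R: Default>` inserts the default only during `System::setup`, which the dispatcher calls for every registered system; a fetch without setup still panics, whereas `WriteExpect` never inserts anything. A std-only sketch of that mechanism (the `World` here is hypothetical, not shred's actual type; `try_fetch` returns `Option` where a real fetch would panic):

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// Std-only sketch (hypothetical types, not shred's API) of the `setup`
// mechanism: `Write<R: Default>` only auto-inserts `R` when `setup` runs;
// fetching without it fails, which is the behavior asked about above.
struct World {
    resources: HashMap<TypeId, Box<dyn Any>>,
}

impl World {
    fn new() -> Self {
        World { resources: HashMap::new() }
    }

    // What `Write::<R>::setup` effectively does: insert `R::default()`
    // only if the resource is missing.
    fn setup<R: Default + 'static>(&mut self) {
        self.resources
            .entry(TypeId::of::<R>())
            .or_insert_with(|| Box::new(R::default()));
    }

    // What a fetch does; a real fetch would panic instead of returning None.
    fn try_fetch<R: 'static>(&self) -> Option<&R> {
        self.resources
            .get(&TypeId::of::<R>())
            .and_then(|b| b.downcast_ref::<R>())
    }
}

#[derive(Default, Debug, PartialEq)]
struct ResB(u32);

fn main() {
    let mut world = World::new();
    assert!(world.try_fetch::<ResB>().is_none()); // no setup -> no resource
    world.setup::<ResB>(); // a dispatcher would call this for each system
    assert_eq!(world.try_fetch::<ResB>(), Some(&ResB(0)));
}
```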

Add 'remove' method to Resources.

It seems desirable to take ownership of a resource again, without needing to wrap it in an Option, by removing it from Resources. For example, in my case, gfx::Device requires being passed ownership of some things, such as buffers, to destroy them, and I am keeping those inside a Resource. It is possible to wrap them in Option and take() them, but since they should never be taken or None during dispatch, that seems an unfortunate (if very minor) workaround.

Problem with Dispatcher's System declaration order

let mut dispatcher = DispatcherBuilder::new()
    .with(PhysicsSystem, "physics_system", &["player_control_system"])    
    .with(CollisionSystem, "collision_system", &["physics_system"])    
    .with(PlayerControlSystem::new(recv), "player_control_system", &[])    
    .build();

This panics because the PhysicsSystem declares a dependency on the PlayerControlSystem before the latter is added to the dispatcher. Shouldn't the order in which the systems are added be irrelevant, since the dispatcher is built using the builder pattern?
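A guess at why it panics: the builder appears to validate each dependency eagerly, at the moment `with` is called, against the names registered so far. A minimal std-only sketch of that eager check (a hypothetical `Builder`, not shred's real one, which panics rather than returning `Result`):

```rust
use std::collections::HashSet;

// Minimal sketch (hypothetical, not shred's builder) of eager dependency
// validation: each `with` checks its deps against the names seen so far,
// so declaration order matters even with a builder pattern.
struct Builder {
    names: HashSet<String>,
}

impl Builder {
    fn new() -> Self {
        Builder { names: HashSet::new() }
    }

    fn with(mut self, name: &str, deps: &[&str]) -> Result<Self, String> {
        for dep in deps {
            if !self.names.contains(*dep) {
                return Err(format!("no system named `{}` registered yet", dep));
            }
        }
        self.names.insert(name.to_string());
        Ok(self)
    }
}

fn main() {
    // Works: the dependency is declared first.
    assert!(Builder::new()
        .with("player_control_system", &[])
        .and_then(|b| b.with("physics_system", &["player_control_system"]))
        .is_ok());

    // Fails: the dependency is declared later, as in the snippet above.
    assert!(Builder::new()
        .with("physics_system", &["player_control_system"])
        .is_err());
}
```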

Release 0.9.4

Tracking issue for 0.9.4 release (also tracks what to do in a release).

Shall fill in steps when completed.

Update shred-derive to use in-scope shred name

Instead of hard-coding shred-derive to use ::shred, remove the leading :: double colon so that users can use a re-exported shred crate. This means users need to specify use shred in modules, but it also means they do not need to keep track of which version of shred a higher-level library uses, as long as that library re-exports the shred crate.

Better fetch panic message

Currently when fetching a resource which has not been inserted, the panic message may be difficult for consumers to understand:

Tried to fetch a resource of type "amethyst::ecs::storage::MaskedStorage<project::CustomComponent>", but the resource does not exist.
Try adding the resource by inserting it manually or using the `setup` method.

The following should be more helpful:

Tried to fetch resource of type `MaskedStorage<CustomComponent>`[^1] from the `World`, but the resource does not exist.

You may ensure the resource exists through one of the following methods:

* Inserting it when the world is created: `world.insert(..)`.
* If the resource implements `Default`, include it in a system's `SystemData`, and ensure the system is registered in the dispatcher.
* If the resource does not implement `Default`, insert in the world during `System::setup`.

[^1]: Full type name: `amethyst::ecs::storage::MaskedStorage<project::CustomComponent>`

Release new version of shred-derive

shred-derive's dependencies have been updated, but it has not been pushed to crates.io

It could be 0.6.2 to allow a smooth update across the ecosystem, as there are no breaking changes.

Validate for conflicts in ParSeq

It would be great if there was a way of confidently writing par! and seq! macros knowing that they'll get validated later. It should be easier to balance stages this way rather than tweaking running_time() and looking at the effects. Right now we're running a risk of conflicting reads with writes if we use these macros. From what I can tell, the logic for validation is already there in StagesBuilder, but it probably can't be reused 1:1.
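The check itself is simple; a sketch of the read/write conflict rule such validation needs (two accessors conflict iff one writes a resource the other reads or writes), with plain integer ids standing in for `ResourceId`:

```rust
// Sketch of the conflict rule a StagesBuilder-style validation needs:
// two systems conflict iff one writes a resource the other reads or writes.
// Plain u32 ids stand in for `ResourceId` here.
fn conflicts(
    a_reads: &[u32],
    a_writes: &[u32],
    b_reads: &[u32],
    b_writes: &[u32],
) -> bool {
    let hits = |xs: &[u32], ys: &[u32]| xs.iter().any(|x| ys.contains(x));
    hits(a_writes, b_reads) || hits(a_writes, b_writes) || hits(b_writes, a_reads)
}

fn main() {
    // read/read never conflicts
    assert!(!conflicts(&[1], &[], &[1], &[]));
    // write vs read on the same resource conflicts
    assert!(conflicts(&[], &[1], &[1], &[]));
    // write vs write conflicts
    assert!(conflicts(&[], &[2], &[], &[2]));
}
```

Running the same rule over every pair inside a `par!` group would catch the conflicting-access risk the macros currently leave to the user.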

wasm32 support

Latest stable version doesn't compile to wasm32:
shred = { version = "0.9.3", default-features = false }

  |
7 |         dispatcher::{SystemId, ThreadLocal, ThreadPoolWrapper},
  |                                             ^^^^^^^^^^^^^^^^^ no `ThreadPoolWrapper` in `dispatch::dispatcher`

Release 0.10.2

Tracking issue for 0.10.2 release. (previous release)

Mainly for dependency bumps.

Steps:

  1. Update Cargo.toml version to match the intended version.
  2. Update CHANGELOG.md with today's date.
  3. PR (always good to have CI check).
  4. git checkout master && git pull
  5. cargo publish
  6. git tag $version && git push origin $version

Benchmark and improve dispatching

The current implementation of fetching the next task to execute is very naive (the implementation is about here).

It may be an advantage that, because of the builder, useful information could be pre-computed, although I'm clueless how and what exactly.

Ongoing code maintenance

  • Update dependencies for both shred, shred-derive -- ran cargo upgrade.
  • Cargo lints (cargo fix)
  • Clippy lints (manual).

Notes:

  • warning: needless fn main in doctest appears when running cargo clippy for:

    • src/dispatch/builder.rs:31:4
    • src/dispatch/builder.rs:65:4
    • src/dispatch/par_seq.rs:151:4

    But those are false positives -- cargo test fails without fn main().

Ideas for improving the Batch interface

Hello

I've been mulling a bit over the batch execution API. To be honest, I don't find it very comfortable to use. I see several problems:

  • It's verbose. To use it, you have to implement two traits (System and BatchController), and for the System you have to implement even the methods that you don't really care about most of the time (e.g. accessor and setup).
  • Unsafe is sticking out of it even though there's nothing the user needs to uphold towards the library or anything unsafe the user wants to do.
  • As the controller is passed as a type parameter, not as a value, there's no (reasonable) way to pass additional info into it.

If my guess is correct, the System is used mostly to make the implementation easier, because the Dispatcher just works with Systems all the time.

I have a proposal how to make the interface better while not introducing much more complexity.

Let's have a trait, something like this (modulo naming, lifetimes...):

trait BatchController {
    type SystemData: SystemData;
    type Tmp;
    fn run_pre(&mut self, data: Self::SystemData) -> Self::Tmp;
    fn run(&mut self, tmp: Self::Tmp, dispatcher: DispatcherProxy);
}

Then, internally, there would be some wrapper that would implement the System, hold the dispatcher, and the instance of this new BatchController.

The relevant part of the example would look something like this:

// No data needed here, the accessor and dispatcher are part of the wrapper below the surface.
struct CustomBatchControllerSystem;

impl BatchController for CustomBatchControllerSystem {
    type SystemData = Read<TomatoStore>;
    type Tmp = TomatoStore;

    fn run_pre(&mut self, data: Self::SystemData) -> Self::Tmp {
        *data
    }

    fn run(&mut self, _ts: TomatoStore, dispatcher: DispatcherProxy) {
        for _i in 0..3 {
            // The BatchUncheckedWorld is part of the proxy
            dispatcher.dispatch();
        }
    }
}

...
DispatcherBuilder::new()
    .with(SayHelloSystem, "say_hello", &[])
    .with_batch(
        CustomBatchControllerSystem, // Passing a value, not a type
        DispatcherBuilder::new().with(..),
        "BatchSystemTest",
        &[],
    )
    .build();

If you like the idea, I can try making a PR with the changes.

Alternatively, it would also be possible to create just the wrapper that would implement the current traits. That would preserve the current API, but the cost is that it would still not be possible to pass values in there, only types.

Implement FetchLock.

You may have a system that works mostly on read resources but occasionally needs to write one, for example to execute a callback that writes some resource, or to compute mostly with read resources and then store the result into a write resource. Without a locking fetch this is inefficient, since execution of the whole system gets delayed because of that single write resource.

Implementation

I think it has to be implemented with std::sync::RwLock, since the Resource trait is Send + Sync anyway.
Also, I think a FetchLock resource should be counted as a "read resource" internally, but it would be nice to mark it somehow, to sort system dependencies so that locking on the same resource becomes less probable; once I roll out a PR on #6, this will be possible.

Improve README.md

Add usage example, badges, license, contribution info and description of internals.

Allow runtime `SystemData`

SystemData currently has associated functions for read and write. This prohibits creating more dynamic systems that don't specify what they fetch at compile-time.

Proposal

  • add SystemData::Specifier
  • move SystemData::read and SystemData::write to Specifier and let them take &self
  • add a specifier method to System with a default implementation that works for all current cases

Example

This is what a custom implementation could look like (very primitive).

struct CustomData {
    // ..
}

impl<'a> SystemData<'a> for CustomData {
    type Specifier = CustomSpec;

    fn fetch(spec: &Self::Specifier, res: &'a Resources) -> Self {
        unimplemented!()
    }
}

struct CustomSpec {
    reads: Vec<ResourceId>,
    writes: Vec<ResourceId>,
}

impl Specifier for CustomSpec {
    fn reads(&self) -> Vec<ResourceId> {
        self.reads.clone()
    }

    fn writes(&self) -> Vec<ResourceId> {
        self.writes.clone()
    }
}

Possible complications

The default implementation for System::specifier won't be able to use the usual Specifier::default function as you'd expect, because we can't specialize the default for that method for Specifier: Default.

Unresolved questions

  • Find a better name for Specifier

Name suggestions for Specifier welcome!

Registering systems with the same name should panic

In amethyst/specs/pull/120 I added a test that registers multiple systems which modify a component, plus a system that reads the component's value and should be executed after the modifications.

The registered systems all have the same name, as I thought that would allow me to specify that the read happens after all of them.

Apparently multiple systems with the same name don't work like that: a HashMap from name to system ID is used, which means only the last same-named system is considered and the others are unnameable.

Is this how it should work, or just an oversight? If it is how it's supposed to work, it should be documented.
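The overwrite behavior described is just ordinary `HashMap::insert` semantics, sketched here with strings standing in for system names and integers for system IDs:

```rust
use std::collections::HashMap;

// Sketch of the name -> system-id map: inserting under the same key
// overwrites, so only the last same-named system stays addressable
// as a dependency, and the earlier ones become unnameable.
fn main() {
    let mut name_to_id: HashMap<&str, u32> = HashMap::new();
    name_to_id.insert("modify", 0);
    name_to_id.insert("modify", 1); // silently replaces id 0
    assert_eq!(name_to_id.len(), 1);
    assert_eq!(name_to_id.get("modify"), Some(&1));
}
```

Panicking (or returning an error) on a duplicate key at registration time would surface the problem immediately instead of silently dropping dependencies.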

Replace System with FnMut

The current code for defining a new task looks like this (from the README)

struct PrintSystem;

impl<'a> System<'a> for PrintSystem {
    type SystemData = PrintData<'a>;

    fn work(&mut self, bundle: PrintData<'a>) {
        println!("{:?}", &*bundle.a);
        println!("{:?}", &*bundle.b);
        
        *bundle.b = ResB; // We can mutate ResB here
                          // because it's `FetchMut`.
    }
}

Every system needs an additional struct (often a unit type, in my own experience), which becomes quite verbose. An alternative might be to replace the System trait with a function:

fn system_print<'a>(_: &mut (), bundle: PrintData<'a>) {
    println!("{:?}", &*bundle.a);
    println!("{:?}", &*bundle.b);
    *bundle.b = ResB;
}

The dispatch builder would then take a F: FnMut(&mut T, U), U: SystemData (or something similar).

let mut dispatcher = DispatcherBuilder::new()
    .add((), system_print, "print", &[]) // Adds a system "print" without dependencies
    .finish();

I'm not 100% sure if this will be possible; some tests a while ago looked positive.

Cannot create dynamic `ResourceId`s

The documentation states that a ResourceId is:

The id of a Resource, which is a tuple struct with a type id and an additional resource id (represented with a usize).

However, it only contains a TypeId, which means there's no way to dynamically construct a ResourceId!
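A sketch of what the documented shape would look like if the dynamic part were actually exposed (the constructors here are hypothetical; the real type at the time held only a `TypeId`):

```rust
use std::any::TypeId;

// Sketch (hypothetical API) of a ResourceId matching the documentation:
// a TypeId plus an additional dynamic id, with a constructor for each case.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
struct ResourceId(TypeId, usize);

impl ResourceId {
    // The static case: dynamic id 0.
    fn new<T: 'static>() -> Self {
        ResourceId(TypeId::of::<T>(), 0)
    }

    // The missing piece: construct an id at runtime.
    fn new_with_dynamic_id<T: 'static>(id: usize) -> Self {
        ResourceId(TypeId::of::<T>(), id)
    }
}

struct ResA;

fn main() {
    // Two ids for the same type, distinguished dynamically.
    assert_eq!(
        ResourceId::new::<ResA>(),
        ResourceId::new_with_dynamic_id::<ResA>(0)
    );
    assert_ne!(
        ResourceId::new::<ResA>(),
        ResourceId::new_with_dynamic_id::<ResA>(1)
    );
}
```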

Ongoing code maintenance

  • Update dependencies for both shred, shred-derive -- ran cargo upgrade.
  • Cargo lints (cargo fix)
  • Clippy lints (manual).

How should we share maintenance?

Hiya all, since shred, shrev, specs (some more?) moved from slide-rs, I'm not sure who should be reviewing / approving / merging pull requests, and releasing / publishing (who has permissions?).

My thoughts on issues / code are mostly aligned with amethyst (design decisions, review requirements, testing, style). Releasing / publishing may be on-demand given there is sufficient test coverage / automated guards if something is inconsistent.

Disable parallel for wasm32 target by default?

Rust thread support isn't there yet for WASM (as far as I understand) 😢

Normally this wouldn't be a problem, because shred seems to work just fine with WASM as long as you set default-features = false in the Cargo.toml. Sadly this is not possible for me, because default-features doesn't affect transitive deps and I'm using amethyst_core, which pulls in specs, which pulls in shred. (In other words, because of a lot of RFCs and issues related to Cargo, I don't think I have a way to compile shred with default-features = false.)

Using the DispatcherBuilder causes a panic:
backend.js:1 Panic error message: Invalid configuration: ThreadPoolBuildError { kind: IOError(Custom { kind: Other, error: "operation not supported on wasm yet" }) }

Should parallel be disabled by default for wasm32? I'm not even sure how to add something like that to the Cargo.toml.

How to implement SystemData for Option<CustomRead>?

I have a custom Reader similar to ReadStorage in specs, and for this type it makes sense to have an Option<CustomRead> variant, i.e. an impl<'a> SystemData<'a> for Option<CustomRead>.

As neither SystemData<'a> nor Option<T> is defined in the current crate, the compiler gives the usual "type parameter T must be used as the type parameter for some local type" error.

Is there any workaround for this issue?
Would it be solved if shred implemented impl<'a, T> SystemData<'a> for Option<T>?

Replace unsafe code with safer variant

I've been digging in some of the internals of shred with emphasis on unsafe code and I've noticed this particular method:

shred/src/cell.rs

Lines 91 to 112 in c1bfcda

pub fn map<U, F>(self, f: F) -> Ref<'a, U>
where
    F: FnOnce(&T) -> &U,
    U: ?Sized,
{
    // Extract the values from the `Ref` through a pointer so that we do not run
    // `Drop`. Because the returned `Ref` has the same lifetime `'a` as the
    // given `Ref`, the lifetime we created through turning the pointer into
    // a ref is valid.
    let flag = unsafe { &*(self.flag as *const _) };
    let value = unsafe { &*(self.value as *const _) };
    // We have to forget self so that we do not run `Drop`. Further it's safe
    // because we are creating a new `Ref`, with the same flag, which will
    // run the cleanup when it's dropped.
    std::mem::forget(self);
    Ref {
        flag,
        value: f(value),
    }
}

I can't see why one would use unsafe here; unless I am missing something crucial, this could be replaced with a completely safe alternative:

    pub fn map<U, F>(self, f: F) -> Ref<'a, U>
    where
        F: FnOnce(&T) -> &U,
        U: ?Sized,
    {
        let val = Ref {
            flag: self.flag,
            value: f(self.value),
        };

        std::mem::forget(self);

        val
    }

Playground
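The safe version does compile, because `flag` and `value` are shared references (which are `Copy`), so reading them out of a type that implements `Drop` is allowed (no E0509). A self-contained toy mirroring the shape of `Ref` in cell.rs (field types are assumptions based on the snippet above):

```rust
use std::cell::Cell;

// Self-contained toy mirroring the shape of `Ref` in cell.rs: the safe
// `map` reads the Copy fields `flag` (&Cell<usize>) and `value` (&T) out
// of a Drop type, and `forget` keeps the borrow count from being
// decremented twice.
struct Ref<'a, T: ?Sized> {
    flag: &'a Cell<usize>,
    value: &'a T,
}

impl<'a, T: ?Sized> Drop for Ref<'a, T> {
    fn drop(&mut self) {
        self.flag.set(self.flag.get() - 1); // release one borrow
    }
}

impl<'a, T: ?Sized> Ref<'a, T> {
    pub fn map<U, F>(self, f: F) -> Ref<'a, U>
    where
        F: FnOnce(&T) -> &U,
        U: ?Sized,
    {
        let val = Ref {
            flag: self.flag,
            value: f(self.value),
        };
        std::mem::forget(self); // the new Ref now owns the cleanup
        val
    }
}

fn borrow_count_after_map() -> usize {
    let flag = Cell::new(1); // one active borrow
    let pair = (1u64, 2u64);
    let r = Ref { flag: &flag, value: &pair };
    let mapped = r.map(|p| &p.1); // no double-decrement inside map
    drop(mapped); // exactly one decrement happens here
    flag.get()
}

fn main() {
    println!("borrow count = {}", borrow_count_after_map());
}
```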

Release 0.10.0

Tracking issue for 0.10.0 release. (previous release)

Steps:

  1. Update Cargo.toml version to match the intended version.
  2. Update CHANGELOG.md with today's date.
  3. PR (always good to have CI check).
  4. git checkout master && git pull
  5. cargo publish
  6. git tag $version && git push origin $version

Tools for debugging + profiling dispatch plans

It would be useful to be able to inspect the plans shred comes up with, e.g. what systems execute in parallel when, and so on. It would also be useful to be able to get information about the execution times of various systems after a dispatch. I could imagine writing a tool that hooks into shred and generates a graphviz .dot file that lets you debug systems causing bottlenecks, or an amethyst editor plugin that lets you watch the relative execution times of your various systems in real time.

Thread-local-resources

It would be nice to have thread-local-resources, as in, resources that do not implement Sync.

It would be useful for systems that need access to things like gfx::Device, gfx::Encoder or gfx::pso::buffer::ConstantBuffer.

Scalability of Shred

Something which came to mind, but I don't know if it has been discussed, yet, or if my understanding of how the scheduling works is just wrong :)

Example (from SPECS):

impl<'a> System<'a> for SysA {
    type SystemData = (WriteStorage<'a, Pos>, ReadStorage<'a, Vel>);

    fn run(&mut self, (mut pos, vel): Self::SystemData) {
        // The `.join()` combines multiple components,
        // so we only access those entities which have
        // both of them.
        for (pos, vel) in (&mut pos, &vel).join() {
            pos.0 += vel.0;
        }
    }
}

Given we have several million entities with Pos and Vel components, how does shred dispatch this system? Does it pass all entities to a single call of run(), or does it call run() multiple times with fewer entities, each on its own thread? If I understand shred correctly, it passes all entities to a single call of run(), hence a single thread. That would mean that with only one system, only one core of my CPU is used for all the data sequentially, while e.g. 31 other cores idle around doing nothing. (I know this example is very contrived, but I want to lay out my train of thought, and my concern that shred, hence specs, hence Amethyst and any other high-level library or application depending on this crate, is scheduled suboptimally.)

That being said, wouldn't it be more optimal to take the number of parallelizable systems, entities, and CPU cores into account and feed smaller chunks to the system, so that one system runs multiple times in parallel?
