v-diffusion-pytorch's Introduction

v-diffusion-pytorch

v objective diffusion inference code for PyTorch, by Katherine Crowson (@RiversHaveWings) and Chainbreakers AI (@jd_pressman).

The models are denoising diffusion probabilistic models (https://arxiv.org/abs/2006.11239), which are trained to reverse a gradual noising process, allowing the models to generate samples from the learned data distributions starting from random noise. The models are also trained on continuous timesteps. They use the 'v' objective from Progressive Distillation for Fast Sampling of Diffusion Models (https://openreview.net/forum?id=TIdIXIpzhoI). Guided diffusion sampling scripts (https://arxiv.org/abs/2105.05233) are included, specifically CLIP guided diffusion. This repo also includes a diffusion model conditioned on CLIP text embeddings that supports classifier-free guidance (https://openreview.net/pdf?id=qw8AKxfYbI), similar to GLIDE (https://arxiv.org/abs/2112.10741). Sampling methods include DDPM, DDIM (https://arxiv.org/abs/2010.02502), and PRK/PLMS (https://openreview.net/forum?id=PlKWVd2yBkY).
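
For intuition, the following is a minimal sketch of the 'v' objective on a cosine alpha/sigma schedule over continuous timesteps. This is an illustration written for this page, not code from the repository:

import math
import torch

def t_to_alpha_sigma(t):
    # Cosine noise schedule on continuous timesteps t in [0, 1].
    return torch.cos(t * math.pi / 2), torch.sin(t * math.pi / 2)

def v_objective_loss(model, x0):
    # Draw one continuous timestep per image and noise the batch.
    t = torch.rand(x0.shape[0], device=x0.device)
    alpha, sigma = t_to_alpha_sigma(t)
    alpha, sigma = alpha[:, None, None, None], sigma[:, None, None, None]
    noise = torch.randn_like(x0)
    x_t = alpha * x0 + sigma * noise
    # The 'v' target; the clean image can be recovered as alpha * x_t - sigma * v,
    # and the noise as sigma * x_t + alpha * v.
    v_target = alpha * noise - sigma * x0
    v_pred = model(x_t, t)
    return (v_pred - v_target).pow(2).mean()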

Thank you to stability.ai for compute to train these models!

Installation

pip install v-diffusion-pytorch

or git clone the repository and then run pip install -e . inside it

Model checkpoints:

  • CC12M_1 CFG 256x256, SHA-256 4fc95ee1b3205a3f7422a07746383776e1dbc367eaf06a5b658ad351e77b7bda

A 602M parameter CLIP conditioned model trained on Conceptual 12M for 3.1M steps and then fine-tuned for classifier-free guidance for 250K additional steps. This is the recommended model to use.

  • CC12M_1 256x256, SHA-256 63946d1f6a1cb54b823df818c305d90a9c26611e594b5f208795864d5efe0d1f

As above, before CFG fine-tuning. The model from the original release of this repo.

  • YFCC_1 512x512, SHA-256 a1c0f6baaf89cb4c461f691c2505e451ff1f9524744ce15332b7987cc6e3f0c8

A 481M parameter unconditional model trained on a 33 million image original resolution subset of Yahoo Flickr Creative Commons 100 Million.

  • YFCC_2 512x512, SHA-256 69ad4e534feaaebfd4ccefbf03853d5834231ae1b5402b9d2c3e2b331de27907

A 968M parameter unconditional model trained on a 33 million image original resolution subset of Yahoo Flickr Creative Commons 100 Million.

The repo also contains PyTorch ports of the four models from v-diffusion-jax (danbooru_128, imagenet_128, wikiart_128, and wikiart_256):

  • Danbooru SFW 128x128, SHA-256 1728940d3531504246dbdc75748205fd8a24238a17e90feb82a64d7c8078c449

  • ImageNet 128x128, SHA-256 cac117cd0ed80390b2ae7f3d48bf226fd8ee0799d3262c13439517da7c214a67

  • WikiArt 128x128, SHA-256 b3ca8d0cf8bd47dcbf92863d0ab6e90e5be3999ab176b294c093431abdce19c1

  • WikiArt 256x256, SHA-256 da45c38aa31cd0d2680d29a3aaf2f50537a4146d80bba2ca3e7a18d227d9b627

Sampling

Example

If the model checkpoint for cc12m_1_cfg is stored in checkpoints/, the following will generate four images:

./cfg_sample.py "the rise of consciousness":5 -n 4 -bs 4 --seed 0

If the checkpoint is stored elsewhere, you need to specify its path with --checkpoint.
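
For programmatic use, here is a rough sketch of sampling from Python instead of the CLI. The diffusion.sampling and diffusion.utils helpers appear in the tracebacks later on this page; the import path, the get_model factory, and the exact sample signature are assumptions, so check cfg_sample.py and clip_sample.py for the real API. The unconditional yfcc_2 model is used to keep the example short (the CLIP-conditioned models additionally need a clip_embed in extra_args):

import torch
from diffusion import get_model, sampling, utils  # assumed import path

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = get_model('yfcc_2')()  # assumed factory returning the model class
model.load_state_dict(torch.load('checkpoints/yfcc_2.pth', map_location='cpu'))
model = model.to(device).eval().requires_grad_(False)

x = torch.randn([1, 3, 512, 512], device=device)      # start from pure noise
t = torch.linspace(1, 0, 50 + 1, device=device)[:-1]  # 50 steps on t in (0, 1]
steps = utils.get_spliced_ddpm_cosine_schedule(t)
out = sampling.sample(model, x, steps, 0., {})        # eta=0, no extra args (assumed signature)
utils.to_pil_image(out[0].cpu()).save('out_00000.png')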

Colab

There is a cc12m_1_cfg Colab (a simplified version of cfg_sample.py) here, which can be used for free.

CFG sampling (best, but only cc12m_1_cfg supports it)

usage: cfg_sample.py [-h] [--images [IMAGE ...]] [--batch-size BATCH_SIZE]
                     [--checkpoint CHECKPOINT] [--device DEVICE] [--eta ETA] [--init INIT]
                     [--method {ddpm,ddim,prk,plms,pie,plms2,iplms}] [--model {cc12m_1_cfg}]
                     [-n N] [--seed SEED] [--size SIZE SIZE]
                     [--starting-timestep STARTING_TIMESTEP] [--steps STEPS]
                     [prompts ...]

prompts: the text prompts to use. Weights for text prompts can be specified by putting the weight after a colon, for example: "the rise of consciousness:5". A weight of 1 produces images that match the prompt about as well as training set images typically match captions like it; higher weights trade diversity for prompt fidelity. The default weight is 3.

--batch-size: sample this many images at a time (default 1)

--checkpoint: manually specify the model checkpoint file

--device: the PyTorch device name to use (default autodetects)

--eta: set to 0 (the default) while using --method ddim for deterministic (DDIM) sampling, 1 for stochastic (DDPM) sampling, and in between to interpolate between the two.
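
Concretely, eta scales how much fresh noise a DDIM-style step injects. The following sketch of the standard update in alpha/sigma notation is written for illustration rather than copied from this repo's sampling.py:

import torch

def ddim_step(x, v, alpha, sigma, alpha_next, sigma_next, eta):
    # One reverse step from the current (alpha, sigma) noise level to the next.
    pred_x0 = alpha * x - sigma * v   # predicted clean image from the 'v' output
    eps = sigma * x + alpha * v       # predicted noise
    # eta = 0: deterministic DDIM; eta = 1: the DDPM amount of noise; in between interpolates.
    ddim_sigma = eta * (sigma_next**2 / sigma**2).sqrt() * (1 - alpha**2 / alpha_next**2).sqrt()
    adjusted_sigma = (sigma_next**2 - ddim_sigma**2).sqrt()
    x = alpha_next * pred_x0 + adjusted_sigma * eps
    if eta:
        x = x + torch.randn_like(x) * ddim_sigma
    return x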

--images: the image prompts to use (local files or HTTP(S) URLs). Weights for image prompts can be specified by putting the weight after a colon, for example: "image_1.png:5". The default weight is 3.

--init: specify the init image (optional)

--method: specify the sampling method to use (DDPM, DDIM, PRK, PLMS, PIE, PLMS2, or IPLMS) (default PLMS). DDPM is the original SDE sampling method, DDIM integrates the probability flow ODE using a first-order method, PLMS is fourth-order pseudo Adams-Bashforth, and PLMS2 is second-order pseudo Adams-Bashforth. PRK (fourth-order Pseudo Runge-Kutta) and PIE (second-order Pseudo Improved Euler) are used to bootstrap PLMS and PLMS2 but can be used on their own if you desire (slow). IPLMS is the fourth-order "Improved PLMS" sampler from Fast Sampling of Diffusion Models with Exponential Integrator (https://arxiv.org/abs/2204.13902).
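
As a rough illustration of the pseudo linear multistep idea: the sampler keeps the last few noise predictions and combines them with Adams-Bashforth coefficients before taking a deterministic DDIM-like transfer step. A sketch of the fourth-order combination (illustrative only; see diffusion/sampling.py for the actual implementation):

def plms_combine(eps, old_eps):
    # old_eps holds the three previous noise predictions, most recent last.
    e1, e2, e3 = old_eps[-1], old_eps[-2], old_eps[-3]
    return (55 * eps - 59 * e1 + 37 * e2 - 9 * e3) / 24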

--model: specify the model to use (default cc12m_1_cfg)

-n: sample until this many images are sampled (default 1)

--seed: specify the random seed (default 0)

--starting-timestep: specify the starting timestep if an init image is used (range 0-1, default 0.9)

--size: the output image size (default auto)

--steps: specify the number of diffusion timesteps (default 50; it can be lowered for faster but lower-quality sampling, and must be much higher with DDIM and especially DDPM)
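
For example, deterministic DDIM sampling with a larger step count (all flags as documented above):

./cfg_sample.py "the rise of consciousness":5 -n 4 -bs 4 --seed 0 --method ddim --eta 0 --steps 250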

CLIP guided sampling (all models)

usage: clip_sample.py [-h] [--images [IMAGE ...]] [--batch-size BATCH_SIZE]
                      [--checkpoint CHECKPOINT] [--clip-guidance-scale CLIP_GUIDANCE_SCALE]
                      [--cutn CUTN] [--cut-pow CUT_POW] [--device DEVICE] [--eta ETA]
                      [--init INIT] [--method {ddpm,ddim,prk,plms,pie,plms2,iplms}]
                      [--model {cc12m_1,cc12m_1_cfg,danbooru_128,imagenet_128,wikiart_128,wikiart_256,yfcc_1,yfcc_2}]
                      [-n N] [--seed SEED] [--size SIZE SIZE]
                      [--starting-timestep STARTING_TIMESTEP] [--steps STEPS]
                      [prompts ...]

prompts: the text prompts to use. Relative weights for text prompts can be specified by putting the weight after a colon, for example: "the rise of consciousness:0.5".

--batch-size: sample this many images at a time (default 1)

--checkpoint: manually specify the model checkpoint file

--clip-guidance-scale: how strongly the result should match the text prompt (default 500). If set to 0, the cc12m_1 model will still be CLIP conditioned and sampling will go faster and use less memory.

--cutn: the number of random crops to compute CLIP embeddings for (default 16)

--cut-pow: the random crop size power (default 1)
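
For intuition, CLIP guidance adds the gradient of a CLIP matching loss, computed over --cutn random crops, to each denoising step via a cond_fn (the cond_sample call is visible in the tracebacks below). The following is a hypothetical sketch of such a function, not the repo's actual implementation; the cond_fn signature, the make_cutouts helper, and the loss are assumptions for illustration:

import torch
import torch.nn.functional as F

def make_cond_fn(clip_model, make_cutouts, target_embed, clip_guidance_scale):
    # Hypothetical: returns a gradient pushing the current denoised estimate
    # toward the target CLIP embedding.
    def cond_fn(x, t, pred, **extra_args):
        with torch.enable_grad():
            pred = pred.detach().requires_grad_()  # predicted clean image, assumed in [-1, 1]
            cutouts = make_cutouts((pred + 1) / 2)  # --cutn random crops, sized by --cut-pow
            image_embeds = F.normalize(clip_model.encode_image(cutouts).float(), dim=-1)
            dists = (image_embeds - target_embed).pow(2).sum(dim=-1)  # stand-in for a spherical distance loss
            loss = dists.mean() * clip_guidance_scale  # --clip-guidance-scale
            grad = -torch.autograd.grad(loss, pred)[0]
        return grad
    return cond_fn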

--device: the PyTorch device name to use (default autodetects)

--eta: set to 0 (the default) while using --method ddim for deterministic (DDIM) sampling, 1 for stochastic (DDPM) sampling, and in between to interpolate between the two.

--images: the image prompts to use (local files or HTTP(S) URLs). Relative weights for image prompts can be specified by putting the weight after a colon, for example: "image_1.png:0.5".

--init: specify the init image (optional)

--method: specify the sampling method to use (DDPM, DDIM, PRK, PLMS, PIE, PLMS2, or IPLMS) (default PLMS). DDPM is the original SDE sampling method, DDIM integrates the probability flow ODE using a first-order method, PLMS is fourth-order pseudo Adams-Bashforth, and PLMS2 is second-order pseudo Adams-Bashforth. PRK (fourth-order Pseudo Runge-Kutta) and PIE (second-order Pseudo Improved Euler) are used to bootstrap PLMS and PLMS2 but can be used on their own if you desire (slow). IPLMS is the fourth-order "Improved PLMS" sampler from Fast Sampling of Diffusion Models with Exponential Integrator (https://arxiv.org/abs/2204.13902).

--model: specify the model to use (default cc12m_1)

-n: sample until this many images are sampled (default 1)

--seed: specify the random seed (default 0)

--starting-timestep: specify the starting timestep if an init image is used (range 0-1, default 0.9)

--size: the output image size (default auto)

--steps: specify the number of diffusion timesteps (default 1000; it can be lowered for faster but lower-quality sampling)
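
For example, CLIP guided sampling with one of the unconditional models (flags as documented above):

./clip_sample.py "a misty mountain landscape, oil on canvas" --model wikiart_256 --steps 500 -n 4 --batch-size 4 --seed 0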

v-diffusion-pytorch's People

Contributors

crowsonkb, rom1504

v-diffusion-pytorch's Issues

what does this line mean in README?

A weight of 1 will sample images that match the prompt roughly as well as images usually match prompts like that in the training set.

I can't wrap my head around this sentence. Could you please explain it with different wording? Thanks!

--size parameters must be a power of 2

First, thank you for your effort! I've enjoyed experimenting with this code and I've learned a lot from your contributions.

Sorry I'm not much help here, but I noticed that specifying an x or y that isn't a power of 2 raises a RuntimeError. Not sure if that's intended behavior at this stage:

RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 24 but got size 25 for tensor number 1 in the list.

Full traceback follows.

$ ./clip_sample.py --size 640 400 "something"
Traceback (most recent call last):
  File "./clip_sample.py", line 203, in <module>
    main()
  File "./clip_sample.py", line 197, in main
    run_all(args.n, args.batch_size)
  File "./clip_sample.py", line 192, in run_all
    outs = run(x[i:i+cur_batch_size], steps, clip_embed[i:i+cur_batch_size])
  File "./clip_sample.py", line 180, in run
    return sampling.cond_sample(model, x, steps, args.eta, extra_args, cond_fn_)
  File "/work/env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/work/v-diffusion-pytorch/diffusion/sampling.py", line 62, in cond_sample
    v = model(x, ts * steps[i], **extra_args)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/v-diffusion-pytorch/diffusion/models/cc12m_1.py", line 246, in forward
    out = self.net(torch.cat([input, timestep_embed], dim=1))
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/v-diffusion-pytorch/diffusion/models/cc12m_1.py", line 63, in forward
    return torch.cat([self.main(input), self.skip(input)], dim=1)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/v-diffusion-pytorch/diffusion/models/cc12m_1.py", line 63, in forward
    return torch.cat([self.main(input), self.skip(input)], dim=1)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/v-diffusion-pytorch/diffusion/models/cc12m_1.py", line 63, in forward
    return torch.cat([self.main(input), self.skip(input)], dim=1)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/v-diffusion-pytorch/diffusion/models/cc12m_1.py", line 63, in forward
    return torch.cat([self.main(input), self.skip(input)], dim=1)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/work/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/work/v-diffusion-pytorch/diffusion/models/cc12m_1.py", line 63, in forward
    return torch.cat([self.main(input), self.skip(input)], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 24 but got size 25 for tensor number 1 in the list.

Google Colab

For info for others, to run the code on Google Colab (with GPU toggled ON beforehand), you only have to run this cell:

%pip install -q requests tqdm ftfy

%cd /content
!git clone --recursive https://github.com/crowsonkb/v-diffusion-pytorch.git

%cd /content/v-diffusion-pytorch
%mkdir -p checkpoints
!curl https://v-diffusion.s3.us-west-2.amazonaws.com/cc12m_1_cfg.pth -o checkpoints/cc12m_1_cfg.pth

!./cfg_sample.py "the rise of consciousness":5 -n 4 -bs 4 --seed 0

The README is only missing the installation of ftfy.

With a Tesla K80, the expected run time seems to be ~40 minutes. I have not run it to completion.

Training

Is there a way to train custom diffusion models?

512x512 Model

Hello,

Thanks for releasing the excellent code. Do you have plans to release a model capable of generating 512x512 images and maybe larger?

Is there a lower resolution model for running this project on lower-end GPUs?

Hello! I know this request may seem strange but is there any lower resolution model for those of us running on very modest GPUs who still would like to try to run this project?
I already tried lowering the batch size down to 1 and the image size too but it didn't help...
Also I know I can run this on CPU, but that way it seems to be taking ages to complete 1 single step :)
And I also know that there could be colabs for running it, but I want to run it on my own metal ;)
So yeah I really hope there's something like a smaller model we could use? (It doesn't matter it will look crap, I will be content with that! :) )
Thanks a lot in advance!

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Hello, when running:

python .\cfg_sample.py "something cool"
or
python .\clip_sample.py "something cool"

I get the following errors:

for cfg_sample.py

Using device: cuda:0
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:51<00:00, 10.23s/it]
  0%|                                                                                            | 0/1 [00:51<?, ?it/s]
Traceback (most recent call last):
  File ".\cfg_sample.py", line 170, in <module>
    main()
  File ".\cfg_sample.py", line 164, in main
    run_all(args.n, args.batch_size)
  File ".\cfg_sample.py", line 161, in run_all
    utils.to_pil_image(out).save(f'out_{i + j:05}.png')
  File "C:\Users\my-name\downloads\v-diffusion-pytorch\diffusion\utils.py", line 36, in to_pil_image
    return TF.to_pil_image((x.clamp(-1, 1) + 1) / 2)
  File "C:\Users\my-name\.conda\envs\v-diffusion\lib\site-packages\torchvision\transforms\functional.py", line 134, in to_pil_image
    npimg = np.transpose(pic.numpy(), (1, 2, 0))
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

for clip_sample.py

Using device: cuda:0
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:12<00:00,  2.59s/it]
  0%|                                                                                            | 0/1 [00:12<?, ?it/s]
Traceback (most recent call last):
  File ".\clip_sample.py", line 239, in <module>
    main()
  File ".\clip_sample.py", line 233, in main
    run_all(args.n, args.batch_size)
  File ".\clip_sample.py", line 230, in run_all
    utils.to_pil_image(out).save(f'out_{i + j:05}.png')
  File "C:\Users\my-name\downloads\v-diffusion-pytorch\diffusion\utils.py", line 36, in to_pil_image
    return TF.to_pil_image((x.clamp(-1, 1) + 1) / 2)
  File "C:\Users\my-name\.conda\envs\v-diffusion\lib\site-packages\torchvision\transforms\functional.py", line 134, in to_pil_image
    npimg = np.transpose(pic.numpy(), (1, 2, 0))
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

 

I tried modifying the following line in cfg_sample.py or in clip_sample.py:
for j, out in enumerate(outs):

with
for j, out in enumerate(outs.cpu()): or for j, out in enumerate(outs.detach().cpu()):

and the error goes away, but then all I get is a completely black image...

Any help?
Thanks!

Any idea on how to attach a clip model to a 64x64 unconditional model from openai/improved-diffusion?

Hey! I love your work and have been following it for a while. I have fine-tuned a 64x64 unconditional model from openai/improved-diffusion (checkpoint).

I was curious if you could lend any insight on how to connect CLIP guidance to my model? I have tried repurposing your notebook (https://colab.research.google.com/drive/12a_Wrfi2_gwwAuN3VvMTwVMz9TfqctNj#scrollTo=1YwMUyt9LHG1), however past 100 steps my model seems to diverge.

I think perhaps there is too much noise being added for the smaller image size? How might I fix this?

[Question] Questions about `zero_embed` and `weights`

Thanks for this great work. I've recently become interested in using diffusion models to generate images iteratively. I found your script cfg_sample.py to be a nice implementation and decided to learn from it. However, because I'm new to this field, I've run into a few things that are hard for me to understand. It'd be great if you could provide some hints or suggestions. Thank you!!
My questions are listed below. They're about the script cfg_sample.py.

  1. I noticed that the code uses zero_embed as one of the conditioning inputs. What is its purpose? Is it designed to allow the case where no prompt is given?
  2. I also noticed that the weight of zero_embed is computed as 1 - sum(weights). I think the 1 is there to make the weights sum to one, but the zero_embed weight can then be negative; should the weights be normalized before the intermediate noise predictions are combined?

Thanks very much!!
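
For readers with the same question: the following is a minimal sketch of the weighting described above, written for this page rather than taken from cfg_sample.py. With a single prompt of weight w it reduces to v_uncond + w * (v_cond - v_uncond), the usual classifier-free guidance formula, which is why the zero-embedding weight 1 - sum(weights) can legitimately be negative and the weights are not normalized:

import torch

def combine_cfg(vs, prompt_weights):
    # vs: stacked v predictions of shape [n_conds, batch, C, H, W], with the
    # zero/empty-prompt prediction first and one prediction per prompt after it.
    weights = torch.tensor([1 - sum(prompt_weights), *prompt_weights],
                           device=vs.device, dtype=vs.dtype)
    return (vs * weights[:, None, None, None, None]).sum(dim=0)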

Request for argument

Hi,
Firstly, thank you for releasing your amazing work!

This isn't an issue, but a request for an argument that adds the ability to save the image every x steps.
Similar to the "display_every" option in the Colab version, but if the value were set to 50 we would instead get something like "output0_step-250.png", "output0_step-300.png", etc.

Hope this isn't too much trouble. I know it's probably simple but it's above my Python skill level which is basically zero :)

Thanks again!

Generated images are completely black?! 😵 What am I doing wrong?

Hello,
I am on Windows 10, and my gpu is a PNY Nvidia GTX 1660 TI 6 Gb.
I installed V-Diffusion like so:

  • conda create --name v-diffusion python=3.8
  • conda activate v-diffusion
  • conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch (as per Pytorch website instructions)
  • pip install requests tqdm

The problem is that when I launch the cfg_sample.py or clip_sample.py command lines, the generated images are completely black, although the inference process seems to run nicely and without errors.

Things I've tried:

  • installing previous pytorch version with conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
  • removing V-Diffusion conda environment completely and recreating it anew
  • uninstalling nvidia drivers and performing a new clean driver install (I tried both Nvidia Studio drivers and Nvidia Game Ready drivers)
  • uninstalling and reinstalling Conda completely

But nothing helped... and at this point I don't know what else to try...

The only interesting piece of information I could gather is that for some reason this problem also happens with another text-to-image project called Big Sleep, where, similar to V-Diffusion, the inference process appears to run correctly but the generated images are all black.

I think there must be some simple detail I'm overlooking... which it's making me go insane... 😵
Please let me know something if you think you can help!
THANKS !

Higher resolution `cc12m_1_cfg` model

First off, I'd like to say the new cc12m_1_cfg model is amazing and thank you for the work you're doing.

Are there any plans to release a 512x512 version of it? I know it's possible to output images at any size, but it's clear they look best at the native 256x256 resolution. While sometimes very beautiful in their own way, higher resolutions tend to repeat patterns and multiple generations of the prompt do not look as unique.

Images don’t seem to evolve with each iteration

Thanks for sharing such an amazing repo!

I am testing a prompt like OpenAI's "an astronaut riding a horse in a photorealistic style" to compare. But somehow the iterations seem to be stuck on the same image.

This is my first test, so it could very likely be that I am doing something wrong. Results and settings are attached below…

(Result images and settings were attached to the original issue.)

Metrics on WikiArt model

Hi!

I wanted to thank you for your work, especially since without you DiscoDiffusion wouldn't exist!

Still, I was wondering if you had the metrics (Precision, Recall, FID, and Inception Score) for the 256x256 WikiArt model?

AttributeError: module 'torch' has no attribute 'special'

torch version: 1.8.1+cu111

python ./cfg_sample.py "the rise of consciousness":5 -n 4 -bs 4 --seed 0
Using device: cuda:0
Traceback (most recent call last):
  File "./cfg_sample.py", line 154, in <module>
    main()
  File "./cfg_sample.py", line 148, in main
    run_all(args.n, args.batch_size)
  File "./cfg_sample.py", line 136, in run_all
    steps = utils.get_spliced_ddpm_cosine_schedule(t)
  File "C:\Users\m\Desktop\v-diffusion-pytorch\diffusion\utils.py", line 75, in get_spliced_ddpm_cosine_schedule
    ddpm_part = get_ddpm_schedule(big_t + ddpm_crossover - cosine_crossover)
  File "C:\Users\m\Desktop\v-diffusion-pytorch\diffusion\utils.py", line 65, in get_ddpm_schedule
    log_snr = -torch.special.expm1(1e-4 + 10 * ddpm_t**2).log()
AttributeError: module 'torch' has no attribute 'special'
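
Note: torch.special was added in PyTorch 1.9, so it is missing on torch 1.8.1; upgrading PyTorch is the simplest fix. If you need to stay on an older version, an equivalent expression that avoids torch.special (a suggested local patch, not code from the repo) would be:

log_snr = -torch.log(torch.expm1(1e-4 + 10 * ddpm_t**2))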
