
cog-stable-diffusion's People

Contributors

afiaka87, andreasjansson, anotherjesse, bfirsh, chenxwh, daanelson, justinmerrell, mattt, radi-cho, uglyrobot, vshashkov, zeke


cog-stable-diffusion's Issues

CUDA out of memory - SD2.1

When asking for 4 outputs, with everything else at the default, I sometimes get:

Output
CUDA out of memory. Tried to allocate 12.66 GiB (GPU 0; 39.59 GiB total capacity; 19.58 GiB already allocated; 5.69 GiB free; 32.15 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Anecdotally, this occurred after I triggered an NSFW exception via the API.

Perhaps throwing an exception doesn't allow torch to reclaim GPU memory, or perhaps it is unrelated.
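
The error message itself suggests one mitigation: capping the CUDA caching allocator's split size. A minimal sketch, assuming the option is set before torch initializes CUDA (e.g. at the very top of predict.py); the 512 MiB value here is illustrative, not a tuned setting:

```python
import os

# Cap the allocator's split size to reduce fragmentation, as suggested
# by the OOM message. This must run before torch touches CUDA, so it
# belongs at the top of the predictor module. 512 is an assumed value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```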

`cog build` fails at `Adding labels to image...`

I'm trying to follow this guide to set up this repo on a Google Cloud instance. The build failed multiple times, even after clearing out Docker between runs. Have you seen this before? Here are my logs:

andre@stable-diffusion-api:~/cog-stable-diffusion$ cog build
⚠ Cog doesn't know if CUDA 11.6.2 is compatible with PyTorch 1.12.1 --extra-index-url=https://download.pytorch.org/whl/cu116. This might cause CUDA problems.
Building Docker image from environment in cog.yaml as cog-cog-stable-diffusion...
[+] Building 430.8s (17/17) FINISHED
 => [internal] load build definition from Dockerfile                                                                                           0.0s
 => => transferring dockerfile: 1.69kB                                                                                                         0.0s
 => [internal] load .dockerignore                                                                                                              0.0s
 => => transferring context: 2B                                                                                                                0.0s
 => resolve image config for docker.io/docker/dockerfile:1.2                                                                                   0.4s
 => docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e2230a333fdbcc                     0.6s
 => => resolve docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e2230a333fdbcc                         0.0s
 => => sha256:e3ee2e6b536452d876b1c5aa12db9bca51b8f52b2505178cae6d13e33daeed2b 528B / 528B                                                     0.0s
 => => sha256:86e43bba076d67c1a890cbc07813806b11eca53843dc643202d939b986c8c332 1.21kB / 1.21kB                                                 0.0s
 => => sha256:3cc8e449ce9f6e0752ede8f50a7334bf0c7b2d24d76da2ffae7aa6a729dd1da4 9.64MB / 9.64MB                                                 0.3s
 => => sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e2230a333fdbcc 1.69kB / 1.69kB                                                 0.0s
 => => extracting sha256:3cc8e449ce9f6e0752ede8f50a7334bf0c7b2d24d76da2ffae7aa6a729dd1da4                                                      0.2s
 => [internal] load metadata for docker.io/nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04                                                         0.5s
 => [internal] load build context                                                                                                              0.0s
 => => transferring context: 114.38kB                                                                                                          0.0s
 => [stage-0 1/9] FROM docker.io/nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04@sha256:91779650798919050553d6673d04a96bfd7cdf3fa931b6ff32120b7f  88.9s
 => => resolve docker.io/nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04@sha256:91779650798919050553d6673d04a96bfd7cdf3fa931b6ff32120b7f2eaaa4ad   0.0s
 => => sha256:624cfe37262e44c78cd50ec7bc8a98e417be3b81bb215eed099bf7da343b1a63 16.20kB / 16.20kB                                               0.0s
 => => sha256:3a665e4036160d33fc32ce19c889271febc4fe4cdb82637cdd8c9ef10ca9541c 23.60MB / 23.60MB                                               0.4s
 => => sha256:91779650798919050553d6673d04a96bfd7cdf3fa931b6ff32120b7f2eaaa4ad 743B / 743B                                                     0.0s
 => => sha256:828017b99ce1b75fe6307d54f97bdd9f224730a1a3f31d935ce85209cb08c93c 2.43kB / 2.43kB                                                 0.0s
 => => sha256:eaead16dc43bb8811d4ff450935d607f9ba4baffda4fc110cc402fa43f601d83 28.58MB / 28.58MB                                               0.6s
 => => sha256:cb69caf25724810d45aec6392c11de6f67c2efde38da9d46402857dedb68aefb 7.93MB / 7.93MB                                                 0.5s
 => => sha256:bcf9a52c75ac54dc66b5b683fc8dc2bb8a1cc8eef9340433a3efbfe8d6fa3006 187B / 187B                                                     0.5s
 => => sha256:8941157b58ada869bd12299ba8a56cdc5317923a5ce7df8158c5a3b44ff2fb67 6.43kB / 6.43kB                                                 0.6s
 => => sha256:9ce4eceb346b57d4be964a3a57551204b7435b2eae1d8c3154ef672661d710e4 1.12GB / 1.12GB                                                28.3s
 => => sha256:01035e520ac2fd77f181410687ecc81e9df074e738f8a4dfea4589572c663bd8 1.44GB / 1.44GB                                                32.8s
 => => sha256:b1c91cec61e689319fc5354207f545dc4d065599ae530f8aae92bded993c72e3 62.66kB / 62.66kB                                               0.7s
 => => extracting sha256:eaead16dc43bb8811d4ff450935d607f9ba4baffda4fc110cc402fa43f601d83                                                      1.0s
 => => sha256:246ba1b32462cea77206e04797df75780a0bdebd112b020b9bf11a7c6694eaa1 85.60kB / 85.60kB                                               0.8s
 => => sha256:dc3b3af10cf618c7b3038ca70bbcc422701fa0bf6cb9afae277406df7140445a 1.48GB / 1.48GB                                                24.5s
 => => extracting sha256:cb69caf25724810d45aec6392c11de6f67c2efde38da9d46402857dedb68aefb                                                      0.3s
 => => extracting sha256:3a665e4036160d33fc32ce19c889271febc4fe4cdb82637cdd8c9ef10ca9541c                                                      0.5s
 => => extracting sha256:bcf9a52c75ac54dc66b5b683fc8dc2bb8a1cc8eef9340433a3efbfe8d6fa3006                                                      0.0s
 => => extracting sha256:8941157b58ada869bd12299ba8a56cdc5317923a5ce7df8158c5a3b44ff2fb67                                                      0.0s
 => => extracting sha256:9ce4eceb346b57d4be964a3a57551204b7435b2eae1d8c3154ef672661d710e4                                                     13.8s
 => => extracting sha256:b1c91cec61e689319fc5354207f545dc4d065599ae530f8aae92bded993c72e3                                                      0.0s
 => => extracting sha256:01035e520ac2fd77f181410687ecc81e9df074e738f8a4dfea4589572c663bd8                                                     21.0s
 => => extracting sha256:246ba1b32462cea77206e04797df75780a0bdebd112b020b9bf11a7c6694eaa1                                                      0.0s
 => => extracting sha256:dc3b3af10cf618c7b3038ca70bbcc422701fa0bf6cb9afae277406df7140445a                                                     20.2s
 => [stage-0 2/9] RUN rm -f /etc/apt/sources.list.d/cuda.list &&     rm -f /etc/apt/sources.list.d/nvidia-ml.list &&     apt-key del 7fa2af80  3.8s
 => [stage-0 3/9] RUN --mount=type=cache,target=/var/cache/apt apt-get update -qq && apt-get install -qqy --no-install-recommends  make  bui  25.5s
 => [stage-0 4/9] RUN curl -s -S -L https://raw.githubusercontent.com/pyenv/pyenv-installer/master/bin/pyenv-installer | bash &&  git clone  134.3s
 => [stage-0 5/9] COPY .cog/tmp/build3839558126/cog-0.0.1.dev-py3-none-any.whl /tmp/cog-0.0.1.dev-py3-none-any.whl                             0.0s
 => [stage-0 6/9] RUN --mount=type=cache,target=/root/.cache/pip pip install /tmp/cog-0.0.1.dev-py3-none-any.whl                              11.4s
 => [stage-0 7/9] RUN --mount=type=cache,target=/root/.cache/pip pip install   diffusers==0.6.0 torch==1.12.1 --extra-index-url=https://dow  146.2s
 => [stage-0 8/9] WORKDIR /src                                                                                                                 0.0s
 => [stage-0 9/9] COPY . /src                                                                                                                  0.1s
 => exporting to image                                                                                                                         0.0s
 => => exporting layers                                                                                                                        0.0s
 => => writing image sha256:55b1710a44395212351d427a7761e45845e5136e435145c8c20d4ee3c0cfccdc                                                   0.0s
 => => naming to docker.io/library/cog-cog-stable-diffusion                                                                                    0.0s
 => exporting cache                                                                                                                            0.0s
 => => preparing build cache for export                                                                                                        0.0s
Adding labels to image...

docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: nvml error: driver not loaded: unknown.

ⅹ Failed to get type signature: exit status 125

Sensitive NSFW filter

I'm trying to build a website that lets people generate SD images.

Based on one of my test prompts, 'art by alfredo rodriguez, man in water, dynamic, movement, beautiful oil painting, fine art, award-winning art, beautiful lighting, intricate detail', the Replicate API returns an error with additional info including: 'Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed.' Is there any way to turn off the NSFW filter? Otherwise, this filter seems too sensitive for everyday use.
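
For self-hosted deployments there is a simple escape hatch: diffusers' Stable Diffusion pipelines skip the check when their safety_checker attribute is None. A hedged sketch (`pipe` stands for any already-loaded StableDiffusionPipeline; disabling the filter is only appropriate when you control the deployment):

```python
# Sketch: clearing safety_checker on a loaded diffusers pipeline makes
# it skip the NSFW check (diffusers typically logs a warning instead).
# `pipe` is assumed to be a StableDiffusionPipeline instance.
def disable_safety_checker(pipe):
    pipe.safety_checker = None
    return pipe
```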

Using a scheduler that works for pretrained dreambooth weights and default weights

We are in the process of consolidating the predictor from replicate/dreambooth-template into this repo, so the DreamBooth API will use the same predictor code as https://replicate.com/stability-ai/stable-diffusion, and all the nice features we add to the "canonical" model will become available to newly trained DreamBooth models.

From #59

The only other major difference between this and dreambooth-template is that it has a hardcoded scheduler:

    scheduler = DDIMScheduler(
        beta_start=0.00085,
        beta_end=0.012,
        beta_schedule="scaled_linear",
        clip_sample=False,
        set_alpha_to_one=False,
    )

The default scheduler seems to work, although I don't know whether those "magic numbers" in the DDIMScheduler in dreambooth-template were chosen to maximize the quality of the DreamBooth generations.

Init_image not working

Whenever I pass init_image data via the HTTP API, the response status never changes from "starting" to anything else.

unexpected EOF

cog build -t realms-adventurers-v3 --debug
Setting CuDNN to version 11.6
Building Docker image from environment in cog.yaml as realms-adventurers-v3...
$ docker build --file - --build-arg BUILDKIT_INLINE_CACHE=1 --tag realms-adventurers-v3 --progress auto .
[+] Building 1.8s (22/23)                                                                        
 => [internal] load .dockerignore                                                           0.0s
 => => transferring context: 94B                                                            0.0s
 => [internal] load build definition from Dockerfile                                        0.0s
 => => transferring dockerfile: 2.05kB                                                      0.0s
 => resolve image config for docker.io/docker/dockerfile:1.2                                0.9s
 => [auth] docker/dockerfile:pull token for registry-1.docker.io                            0.0s
 => CACHED docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbd  0.0s
 => [internal] load build definition from Dockerfile                                        0.0s
 => [internal] load .dockerignore                                                           0.0s
 => [internal] load metadata for docker.io/nvidia/cuda:11.6.0-cudnn8-devel-ubuntu20.04      0.7s
 => [auth] nvidia/cuda:pull token for registry-1.docker.io                                  0.0s
 => [internal] load build context                                                           0.0s
 => => transferring context: 40.99kB                                                        0.0s
 => [stage-0  1/12] FROM docker.io/nvidia/cuda:11.6.0-cudnn8-devel-ubuntu20.04@sha256:6a4e  0.0s
 => CACHED [stage-0  2/12] RUN rm -f /etc/apt/sources.list.d/cuda.list &&     rm -f /etc/a  0.0s
 => CACHED [stage-0  3/12] RUN --mount=type=cache,target=/var/cache/apt set -eux; apt-get   0.0s
 => CACHED [stage-0  4/12] RUN --mount=type=cache,target=/var/cache/apt apt-get update -qq  0.0s
 => CACHED [stage-0  5/12] RUN curl -s -S -L https://raw.githubusercontent.com/pyenv/pyenv  0.0s
 => CACHED [stage-0  6/12] COPY .cog/tmp/build346449291/cog-0.0.1.dev-py3-none-any.whl /tm  0.0s
 => CACHED [stage-0  7/12] RUN --mount=type=cache,target=/root/.cache/pip pip install /tmp  0.0s
 => CACHED [stage-0  8/12] COPY .cog/tmp/build346449291/requirements.txt /tmp/requirements  0.0s
 => CACHED [stage-0  9/12] RUN --mount=type=cache,target=/root/.cache/pip pip install -r /  0.0s
 => CACHED [stage-0 10/12] RUN pip install triton                                           0.0s
 => CACHED [stage-0 11/12] WORKDIR /src                                                     0.0s
 => [stage-0 12/12] COPY . /src                                                             0.0s
 => preparing layers for inline cache                                                       0.1s
unexpected EOF
ⅹ Failed to build Docker image: exit status 1

I have this on both
cog version 0.6.1 (built 2022-12-15T20:25:16Z) and cog version 0.7.0-beta16 (built 2023-03-30T15:51:06Z)

Reducing inference time for the SD2.1 base model

I managed to shave a few seconds off inference times for SD2.1 at 512x512 (50 steps) and 768x768 (50 steps).

Using just a few additions:

import torch
from diffusers import StableDiffusionPipeline

# Enable cuDNN autotuning and TF32 matmuls for faster GPU math
torch.backends.cudnn.benchmark = True
torch.backends.cuda.matmul.allow_tf32 = True

pipe = StableDiffusionPipeline.from_pretrained(
    MODEL_ID,
    cache_dir=MODEL_CACHE,
    local_files_only=True,
)
pipe = pipe.to("cuda")

# xformers memory-efficient attention speeds up inference;
# VAE slicing trims peak memory when generating multiple images
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_vae_slicing()

Overall output didn't suffer because of this; I'm getting crisp images. How do I create a PR to add these changes, and are there any tests around this?

Here are the inferences:

TypeError: unsupported operand type(s) for *: 'FieldInfo' and 'FieldInfo'

Update: See here for the stunning conclusion to the saga.

Cog Info:

$ cog --version
cog version 0.6.1 (built 2022-12-15T20:25:16Z)

Branch Info:

commit 3851052

Steps to Reproduce:

  1. Clone the cog-stable-diffusion repo.
  2. Download the models with the download-models script.
  3. Run a build with cog build -t cog-stable-diffusion.
  4. Hit the /prediction endpoint with a prompt and, optionally, a width. (See below for sample.)

Sample requests:

requests.post(DIFFUSION_ENDPOINT, json={"prompt": "An oil painting of a sentient potato wearing brilliant silver armor, 4k, high quality, trending", "width": 512, "height": 512, "num_inference_steps": 10})

requests.post(DIFFUSION_ENDPOINT, headers={"content-type": "application/json"}, json={"prompt": "An oil painting of a sentient potato wearing brilliant silver armor, 4k, high quality, trending", "num_inference_steps": 10})

Expected Behavior:

The container will spin for a while and spit back an image based on the prompt.

Observed Behavior:

The endpoint returns an exception:

INFO:     136.24.69.10:63320 - "POST /predictions HTTP/1.1" 200 OK
Using seed: description='Random seed. Leave blank to randomize the seed' extra={'choices': None}
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.9/lib/python3.10/site-packages/cog/server/worker.py", line 209, in _predict
result = self._predictor.predict(**payload)
File "/root/.pyenv/versions/3.10.9/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/src/predict.py", line 104, in predict
if width * height > 786432:
TypeError: unsupported operand type(s) for *: 'FieldInfo' and 'FieldInfo'
INFO:     136.24.69.10:63392 - "POST /predictions HTTP/1.1" 200 OK

Reproducibility:

~50% of the time.

The line in the code:
https://github.com/replicate/cog-stable-diffusion/blob/main/predict.py#L99

        if width * height > 786432:
            raise ValueError(
                "Maximum size is 1024x768 or 768x1024 pixels, because of memory limits. Please select a lower width or height."
            )

This looks completely innocuous: width and height are annotated as integers, but their default values come from Input(), which is built on FastAPI/Pydantic's FieldInfo.
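
The failure mode is easy to reproduce outside Cog: if predict() runs without the framework substituting real arguments for the Input(...) defaults, the parameters arrive as raw FieldInfo sentinels, which do not support arithmetic. A minimal sketch using plain Pydantic (Field stands in here for Cog's Input, which wraps it):

```python
from pydantic import Field

# Field(...) returns a FieldInfo sentinel, not the default value itself.
# If predict() is invoked without the defaults being resolved,
# `width * height` multiplies two FieldInfo objects and raises TypeError.
width = Field(default=512)
height = Field(default=512)

try:
    width * height
except TypeError as err:
    print(err)  # unsupported operand type(s) for *: 'FieldInfo' and 'FieldInfo'
```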

Cog doesn't know if CUDA is compatible with PyTorch / Docker is missing required device driver

Cog says it's not sure about the compatibility up front, then (after a lot of downloads) Docker reports "Docker is missing required device driver".
I figured this is an issue, since Cog pitches itself as:

"📦 Docker containers without the pain.

  • 🤬️ No more CUDA hell. Cog knows which CUDA/cuDNN/PyTorch/Tensorflow/Python combos are compatible and will set it all up correctly for you."

This is my log:
cog-stable-diffusion$ sudo cog run script/download-weights hf_******************************
⚠ Cog doesn't know if CUDA 11.6.2 is compatible with PyTorch 1.12.1 --extra-index-url=https://download.pytorch.org/whl/cu116. This might cause CUDA problems.
Building Docker image from environment in cog.yaml...
[+] Building 2.0s (16/16) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.67kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> resolve image config for docker.io/docker/dockerfile:1.2 0.9s
=> CACHED docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e2230a333fdbcc 0.0s
=> [internal] load metadata for docker.io/nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04 0.6s
=> [stage-0 1/8] FROM docker.io/nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04@sha256:55211df43bf393d3393559d5ab53283d4ebc3943d802b04 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 31.63kB 0.0s
=> CACHED [stage-0 2/8] RUN rm -f /etc/apt/sources.list.d/cuda.list && rm -f /etc/apt/sources.list.d/nvidia-ml.list && apt 0.0s
=> CACHED [stage-0 3/8] RUN --mount=type=cache,target=/var/cache/apt apt-get update -qq && apt-get install -qqy --no-install-recom 0.0s
=> CACHED [stage-0 4/8] RUN curl -s -S -L https://raw.githubusercontent.com/pyenv/pyenv-installer/master/bin/pyenv-installer | bas 0.0s
=> CACHED [stage-0 5/8] COPY .cog/tmp/build1496174735/cog-0.0.1.dev-py3-none-any.whl /tmp/cog-0.0.1.dev-py3-none-any.whl 0.0s
=> CACHED [stage-0 6/8] RUN --mount=type=cache,target=/root/.cache/pip pip install /tmp/cog-0.0.1.dev-py3-none-any.whl 0.0s
=> CACHED [stage-0 7/8] RUN --mount=type=cache,target=/root/.cache/pip pip install diffusers==0.2.4 torch==1.12.1 --extra-index- 0.0s
=> CACHED [stage-0 8/8] WORKDIR /src 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:1c81aeabd3aa4357e1eda8a0c8ea7add1172a525b025079f2361d745f88beb33 0.0s
=> => naming to docker.io/library/cog-cog-stable-diffusion-base 0.0s
=> exporting cache 0.0s
=> => preparing build cache for export 0.0s

Running 'script/download-weights hf_******************************' in Docker with the current directory mounted as a volume...
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
ⅹ Docker is missing required device driver

nvidia-smi
Wed Aug 31 14:08:41 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.141.03 Driver Version: 470.141.03 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| 33% 35C P8 1W / 38W | 5MiB / 2002MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 3517 G /usr/lib/xorg/Xorg 2MiB |
+-----------------------------------------------------------------------------+

docker -v
Docker version 20.10.12, build 20.10.12-0ubuntu2~20.04.1

cat /etc/issue
Ubuntu 20.04.5 LTS

Make 512x512 the default output size?

768x768 is the current default and the outputs tend to be pretty weird. If you're generating humans or animals, they tend to have extra heads or limbs.

I know @anotherjesse can attest to this.

Maybe we should make 512 the default?

System has not been booted with systemd as init system (PID 1). Can't operate.

This might be a Linux question instead of a Replicate question.

When I tried to run cog run script/download-weights

it says
Building Docker image from environment in cog.yaml... ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? ⅹ Failed to build Docker image: exit status 1

When I do sudo systemctl status docker, it returns
System has not been booted with systemd as init system (PID 1). Can't operate. Failed to connect to bus: Host is down

I'm using the Linux machine on vast.ai

Legacy inpainting pipeline/standard model

Is there a specific reason why you still use the legacy inpainting pipeline and not the new one?

Why don’t you use the inpainting model that is specifically optimized for inpainting?

Support for Dreambooth Weights

I've been experimenting with supporting weights from dreambooth in this model:

diff --git a/predict.py b/predict.py
index 5630646..2d23a87 100644
--- a/predict.py
+++ b/predict.py
@@ -10,6 +10,7 @@ from diffusers import (
     StableDiffusionPipeline,
 )
 
+USE_WEIGHTS = os.path.exists("weights")
 MODEL_ID = "stabilityai/stable-diffusion-2-1"
 MODEL_CACHE = "diffusers-cache"
 
@@ -18,11 +19,18 @@ class Predictor(BasePredictor):
     def setup(self):
         """Load the model into memory to make running multiple predictions efficient"""
         print("Loading pipeline...")
-        self.pipe = StableDiffusionPipeline.from_pretrained(
-            MODEL_ID,
-            cache_dir=MODEL_CACHE,
-            local_files_only=True,
-        ).to("cuda")
+        if USE_WEIGHTS:
+            self.pipe = StableDiffusionPipeline.from_pretrained(
+                "weights",
+                safety_checker=None,
+                torch_dtype=torch.float16,
+            ).to("cuda")
+        else:
+            self.pipe = StableDiffusionPipeline.from_pretrained(
+                MODEL_ID,
+                cache_dir=MODEL_CACHE,
+                local_files_only=True,
+            ).to("cuda")
 
     @torch.inference_mode()
     def predict(

The only other major difference between this and dreambooth-template is that it has a hardcoded scheduler:

    scheduler = DDIMScheduler(
        beta_start=0.00085,
        beta_end=0.012,
        beta_schedule="scaled_linear",
        clip_sample=False,
        set_alpha_to_one=False,
    )

The default scheduler seems to work, although I don't know whether those "magic numbers" in the DDIMScheduler in dreambooth-template were chosen to maximize the quality of the DreamBooth generations.


With the above patch, all you have to do is unzip the weights generated by this API (https://replicate.com/replicate/dreambooth) into cog-stable-diffusion and run cog build.

Basic cog setup fails with "ERROR: failed to receive status: rpc error: code = Unavailable desc = error reading from server: EOF"

I'm following the initial steps from Getting started, after making sure that Docker engine was successfully installed.

mkdir cog-quickstart
cd cog-quickstart
# create cog.yaml file
sudo cog run python

The last command fails:

Building Docker image from environment in cog.yaml...
[+] Building 0.3s (13/14)                                                                                                       
 => [internal] load .dockerignore                                                                                          0.0s
 => => transferring context: 2B                                                                                            0.0s
 => [internal] load build definition from Dockerfile                                                                       0.0s
 => => transferring dockerfile: 907B                                                                                       0.0s
 => resolve image config for docker.io/docker/dockerfile:1.2                                                               0.1s
 => CACHED docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e2230a  0.0s
 => [internal] load .dockerignore                                                                                          0.0s
 => [internal] load build definition from Dockerfile                                                                       0.0s
 => [internal] load metadata for docker.io/library/python:3.8                                                              0.1s
 => [stage-0 1/5] FROM docker.io/library/python:3.8@sha256:3e6443f94e3c82d4cce045f777042c67cff0fa3cdaa55b3ac7c36101e9b040  0.0s
 => [internal] load build context                                                                                          0.0s
 => => transferring context: 40.37kB                                                                                       0.0s
 => CACHED [stage-0 2/5] RUN --mount=type=cache,target=/var/cache/apt set -eux; apt-get update -qq; apt-get install -qqy   0.0s
 => CACHED [stage-0 3/5] COPY .cog/tmp/build3636995091/cog-0.0.1.dev-py3-none-any.whl /tmp/cog-0.0.1.dev-py3-none-any.whl  0.0s
 => CACHED [stage-0 4/5] RUN --mount=type=cache,target=/root/.cache/pip pip install /tmp/cog-0.0.1.dev-py3-none-any.whl    0.0s
 => CACHED [stage-0 5/5] WORKDIR /src                                                                                      0.0s
 => preparing layers for inline cache                                                                                      0.0s
ERROR: failed to receive status: rpc error: code = Unavailable desc = error reading from server: EOF
ⅹ Failed to build Docker image: exit status 1

Any additional info I could provide to help debug this?

Use fp32 instead of fp16

Discord user cakeofzerg#3653 investigated the effect of using fp16 instead of fp32 and found that fp32 produces better results overall.

We should change the Replicate model to use fp32.
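
The quality gap is plausible given fp16's precision: half precision keeps only about three significant decimal digits, so fine differences in latents and VAE outputs get rounded away. A quick numeric illustration with NumPy (standing in for torch tensors):

```python
import numpy as np

# float16 has a 10-bit mantissa (~3.3 significant decimal digits),
# so values that differ only in low-order digits collapse together
# when a float32 value is round-tripped through float16.
x32 = np.float32(0.1234567)
x16 = np.float16(x32)
print(float(x32), float(x16))  # the fp16 round-trip loses the tail digits
```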

AMD GPU on Ubuntu - docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

Couldn't find a solution to this error message online, and didn't want to post into the void of Discord, in case someone else hits this problem in the future.

Command ran:

$ cog predict r8.im/stability-ai/stable-diffusion@sha256:be04660a5b93ef2aff61e3668dedb4cbeb14941e62a3fd5998364a32d613e35e \
  -i prompt=hi \
  -i width=512 \
  -i height=512 \
  -i prompt_strength=0.8 \
  -i num_outputs=1 \
  -i num_inference_steps=50 \
  -i guidance_scale=0.8

Output:

Starting Docker image r8.im/stability-ai/stable-diffusion@sha256:be04660a5b93ef2aff61e3668dedb4cbeb14941e62a3fd5998364a32d613e35e and running setup()...
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
ⅹ Failed to start container: exit status 125

Docker version: 20.10.17, build 100c701
OS: Xubuntu 22.04 LTS x86_64
CPU: AMD Ryzen 7 5700U with Radeon Graphics
GPU: AMD ATI 05:00.0 Lucienne

Tried restarting the Docker service, but got the same result.

Any help appreciated. I can try things and provide more logs as needed. I'm not terribly knowledgeable in this area, however, so please bear with me!

docker: invalid character 'c' looking for beginning of value.

Setup:
Windows 10
Docker (latest)

full log (not sure how to make a spoiler):

λ docker run -d -p 5000:5000 --gpus=all r8.im/stability-ai/stable-diffusion@sha256:a9758cbfbd5f3c2094457d996681af52552901775aa2d6dd0b17fd15df959bef
Unable to find image 'r8.im/stability-ai/stable-diffusion@sha256:a9758cbfbd5f3c2094457d996681af52552901775aa2d6dd0b17fd15df959bef' locally
r8.im/stability-ai/stable-diffusion@sha256:a9758cbfbd5f3c2094457d996681af52552901775aa2d6dd0b17fd15df959bef: Pulling from stability-ai/stable-diffusion
d5fd17ec1767: Already exists
d7c6ec6e1327: Already exists
943d01f4776a: Already exists
23025e0ea7d2: Already exists
8acc1c2b01b0: Already exists
894d0771aab5: Extracting [=============> ] 312MB/1.116GB
8451e5a9bff2: Download complete
a6b5bd0a44ab: Downloading [===========================================> ] 1.244GB/1.441GB
364573d3ec98: Download complete
75b4e35cee13: Downloading [=========================================> ] 1.231GB/1.478GB
7b599a9816a8: Download complete
e89d9461ad8d: Download complete
2bb4a5f9dfe2: Downloading [===========================> ] 40.47MB/74.18MB
4c73be41da4a: Waiting
24f6a06e94e1: Pulling fs layer
07a924b46151: Pulling fs layer
26d1e862c567: Waiting
71ac7d9aaea3: Waiting
docker: invalid character 'c' looking for beginning of value.
See 'docker run --help'.

Operation error reporting

WSL2

$ cog run script/download-weights
Building Docker image from environment in cog.yaml...
[+] Building 14.0s (15/16)
 => [internal] load build definition from Dockerfile                                                               0.0s
 => => transferring dockerfile: 2.01kB                                                                             0.0s
 => [internal] load .dockerignore                                                                                  0.0s
 => => transferring context: 34B                                                                                   0.0s
 => resolve image config for docker.io/docker/dockerfile:1.2                                                       0.0s
 => CACHED docker-image://docker.io/docker/dockerfile:1.2                                                          0.0s
 => [internal] load metadata for docker.io/nvidia/cuda:11.6.0-cudnn8-devel-ubuntu20.04                             0.0s
 => [stage-0  1/10] FROM docker.io/nvidia/cuda:11.6.0-cudnn8-devel-ubuntu20.04                                     0.0s
 => [internal] load build context                                                                                  0.1s
 => => transferring context: 40.63kB                                                                               0.1s
 => CACHED [stage-0  2/10] RUN rm -f /etc/apt/sources.list.d/cuda.list &&     rm -f /etc/apt/sources.list.d/nvidi  0.0s
 => CACHED [stage-0  3/10] RUN --mount=type=cache,target=/var/cache/apt set -eux; apt-get update -qq; apt-get ins  0.0s
 => CACHED [stage-0  4/10] RUN --mount=type=cache,target=/var/cache/apt apt-get update -qq && apt-get install -qq  0.0s
 => CACHED [stage-0  5/10] RUN curl -s -S -L https://raw.githubusercontent.com/pyenv/pyenv-installer/master/bin/p  0.0s
 => CACHED [stage-0  6/10] COPY .cog/tmp/build2431266460/cog-0.0.1.dev-py3-none-any.whl /tmp/cog-0.0.1.dev-py3-no  0.0s
 => CACHED [stage-0  7/10] RUN --mount=type=cache,target=/root/.cache/pip pip install /tmp/cog-0.0.1.dev-py3-none  0.0s
 => CACHED [stage-0  8/10] COPY .cog/tmp/build2431266460/requirements.txt /tmp/requirements.txt                    0.0s
 => ERROR [stage-0  9/10] RUN --mount=type=cache,target=/root/.cache/pip pip install -r /tmp/requirements.txt     13.5s
------
 > [stage-0  9/10] RUN --mount=type=cache,target=/root/.cache/pip pip install -r /tmp/requirements.txt:
#15 1.134 Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu116
#15 3.250 Collecting diffusers==0.11.1
#15 3.257   Using cached diffusers-0.11.1-py3-none-any.whl (524 kB)
#15 4.544 Collecting torch==1.13.0+cu116
#15 13.49 /root/.pyenv/pyenv.d/exec/pip-rehash/pip: line 20:    99 Killed                  "$PYENV_COMMAND_PATH" "$@"
------
executor failed running [/bin/sh -c pip install -r /tmp/requirements.txt]: exit code: 137
ⅹ Failed to build Docker image: exit status 1

No such file or directory: 'diffusers-cache/models--CompVis--stable-diffusion-safety-checker/snapshots/cb41f3a270d63d454d385fc2e4f571c487c253c5/config.json'

I get this error when running script/download-weights on the main branch from a fresh GitHub GPU Codespaces instance:

Running 'script/download-weights' in Docker with the current directory mounted as a volume...
Downloading (…)lve/main/config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.55k/4.55k [00:00<00:00, 4.16MB/s]
Traceback (most recent call last):
  File "/src/script/download-weights", line 19, in <module>
    saftey_checker = StableDiffusionSafetyChecker.from_pretrained(
  File "/root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2012, in from_pretrained
    config, model_kwargs = cls.config_class.from_pretrained(
  File "/root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/configuration_utils.py", line 532, in from_pretrained
    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/configuration_utils.py", line 559, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/configuration_utils.py", line 644, in _get_config_dict
    config_dict = cls._dict_from_json_file(resolved_config_file)
  File "/root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/configuration_utils.py", line 730, in _dict_from_json_file
    with open(json_file, "r", encoding="utf-8") as reader:
FileNotFoundError: [Errno 2] No such file or directory: 'diffusers-cache/models--CompVis--stable-diffusion-safety-checker/snapshots/cb41f3a270d63d454d385fc2e4f571c487c253c5/config.json'
ⅹ exit status 1

cc @replicate/models any ideas?

git tags for different SD versions

As there are major differences between the Stable Diffusion 1.x and 2.x series, some folks will want to stick with 1.5 rather than move to 2.0 or 2.1.

Having git tags that link commits to the Stable Diffusion version number would be useful.

Deploying the cog as-is throws an error on startup

Hi,

I'm trying to deploy this cog to Replicate, and was able to push it successfully using the cog push command.

I haven't modified anything in the repo - it was deployed as-is.

When I try to run the model once it's deployed, I get this error:

Loading pipeline...
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 326, in load_config
config_file = hf_hub_download(
File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1221, in hf_hub_download
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/cog/server/worker.py", line 185, in _setup
run_setup(self._predictor)
File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/cog/predictor.py", line 49, in run_setup
predictor.setup()
File "/src/predict.py", line 25, in setup
self.pipe = StableDiffusionPipeline.from_pretrained(
File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/diffusers/pipeline_utils.py", line 459, in from_pretrained
config_dict = cls.load_config(
File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 354, in load_config
raise EnvironmentError(
OSError: stabilityai/stable-diffusion-2-1 does not appear to have a file named model_index.json.
Traceback (most recent call last):
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/cog/server/runner.py", line 291, in setup
    for event in worker.setup():
  File "/root/.pyenv/versions/3.10.11/lib/python3.10/site-packages/cog/server/worker.py", line 126, in _wait
    raise FatalWorkerException(raise_on_error + ": " + done.error_detail)
cog.server.exceptions.FatalWorkerException: Predictor errored during setup: stabilityai/stable-diffusion-2-1 does not appear to have a file named model_index.json.

Why is it running into this issue?
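For what it's worth, the traceback shows setup() trying to resolve `stabilityai/stable-diffusion-2-1` with outgoing traffic disabled, which suggests the weights were never baked into the image - i.e. `script/download-weights` needs to run before `cog push`. A minimal pre-push sanity check, where the `diffusers-cache` directory name is an assumption taken from this repo's download script:

```python
from pathlib import Path

# Hedged pre-push check: setup() loads the pipeline offline, so a
# model_index.json must already exist under the local weights cache.
# The cache directory name is an assumption, not a guaranteed path.
def weights_present(cache_dir: str = "diffusers-cache") -> bool:
    root = Path(cache_dir)
    return root.is_dir() and any(root.rglob("model_index.json"))

if not weights_present():
    print("Run `cog run script/download-weights` before `cog push`.")
```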

ValueError: operands could not be broadcast together with shapes (2,) (8,)

This morning, version a9758cb started giving me the following error for all img2img queries via the API:

Running predict()...
Using seed: 36432201
Traceback (most recent call last):
File "/src/src/cog/python/cog/server/worker.py", line 209, in _predict
result = self._predictor.predict(**payload)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
return func(*args, **kwargs)
File "/src/predict.py", line 107, in predict
output = self.pipe(
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/src/image_to_image.py", line 112, in __call__
self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/schedulers/scheduling_pndm.py", line 115, in set_timesteps
prk_timesteps = np.array(self._timesteps[-self.pndm_order :]).repeat(2) + np.tile(
ValueError: operands could not be broadcast together with shapes (2,) (8,)

Note: I can only repro this when using API calls. I can't get it to happen via the Replicate web UI. Example parameters:

  num_outputs: 1,
  num_inference_steps: 1,
  guidance_scale: 15,
  prompt_strength: 0.7,
  init_image: 'https://sdui-staging.imgix.net/uploads/cl9hefbnr00743r44eyfhoox7/file-bddcaaabd0334f6fb363f6c058103fa97d41c715-undefined?w=1024&h=1024',
  prompt: 'painting of a close up of a black dog on a leash',
  seed: 238206034
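The traceback points at the PNDM scheduler's `set_timesteps`, and the API payload above uses `num_inference_steps: 1` - the PRK warm-up in PNDM groups timesteps, so a tiny step count can break the broadcast. That would also explain why the web UI, with its larger default, never reproduces it. A hedged guard, where the minimum of 4 is my assumption for illustration rather than a documented diffusers limit:

```python
# Hypothetical guard against too-few sampling steps for the PNDM
# scheduler. MIN_PNDM_STEPS is an assumed illustrative value.
MIN_PNDM_STEPS = 4

def safe_num_inference_steps(requested: int, minimum: int = MIN_PNDM_STEPS) -> int:
    """Clamp a user-supplied step count to a scheduler-safe minimum."""
    return max(requested, minimum)

print(safe_num_inference_steps(1))   # -> 4
print(safe_num_inference_steps(50))  # -> 50
```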

"cog run script/download-weights" fails to download for me. says exit code 137. How can i fix this? (i am very new to this so any help would be great)

@lleopard1704 ➜ /workspaces/cog-stable-diffusion (main) $ cog run script/download-weights
Building Docker image from environment in cog.yaml...
[+] Building 80.7s (16/17)
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 1.89kB 0.0s
=> [internal] load .dockerignore 0.2s
=> => transferring context: 34B 0.0s
=> resolve image config for docker.io/docker/dockerfile:1.2 1.6s
=> [auth] docker/dockerfile:pull token for registry-1.docker.io 0.0s
=> CACHED docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e2230a333fdbcc 0.0s
=> [internal] load metadata for docker.io/nvidia/cuda:11.6.0-cudnn8-devel-ubuntu20.04 1.0s
=> [auth] nvidia/cuda:pull token for registry-1.docker.io 0.0s
=> [stage-0 1/9] FROM docker.io/nvidia/cuda:11.6.0-cudnn8-devel-ubuntu20.04@sha256:6a4ef3d0032001ab91e0e6ecc27ebf59dd122a531703de8f64cc84 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 42.24kB 0.0s
=> CACHED [stage-0 2/9] RUN --mount=type=cache,target=/var/cache/apt set -eux; apt-get update -qq; apt-get install -qqy --no-install-reco 0.0s
=> CACHED [stage-0 3/9] RUN --mount=type=cache,target=/var/cache/apt apt-get update -qq && apt-get install -qqy --no-install-recommends 0.0s
=> CACHED [stage-0 4/9] RUN curl -s -S -L https://raw.githubusercontent.com/pyenv/pyenv-installer/master/bin/pyenv-installer | bash && g 0.0s
=> CACHED [stage-0 5/9] COPY .cog/tmp/build1717844414/cog-0.0.1.dev-py3-none-any.whl /tmp/cog-0.0.1.dev-py3-none-any.whl 0.0s
=> CACHED [stage-0 6/9] RUN --mount=type=cache,target=/root/.cache/pip pip install /tmp/cog-0.0.1.dev-py3-none-any.whl 0.0s
=> CACHED [stage-0 7/9] COPY .cog/tmp/build1717844414/requirements.txt /tmp/requirements.txt 0.0s
=> ERROR [stage-0 8/9] RUN --mount=type=cache,target=/root/.cache/pip pip install -r /tmp/requirements.txt 76.5s

[stage-0 8/9] RUN --mount=type=cache,target=/root/.cache/pip pip install -r /tmp/requirements.txt:
#16 4.933 Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu116
#16 6.026 Collecting diffusers==0.11.1
#16 6.059 Using cached diffusers-0.11.1-py3-none-any.whl (524 kB)
#16 7.201 Collecting torch==1.13.0+cu116
#16 7.208 Downloading https://download.pytorch.org/whl/cu116/torch-1.13.0%2Bcu116-cp310-cp310-linux_x86_64.whl (1983.0 MB)
#16 74.74 /root/.pyenv/pyenv.d/exec/pip-rehash/pip: line 20: 84 Killed "$PYENV_COMMAND_PATH" "$@"


executor failed running [/bin/sh -c pip install -r /tmp/requirements.txt]: exit code: 137
ⅹ Failed to build Docker image: exit status 1
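Exit code 137 is 128 + 9, meaning the pip process was killed with SIGKILL. In a Docker build that is almost always the kernel OOM killer - the cu116 torch wheel alone is ~2 GB - so giving Docker (or the Codespaces/WSL2 VM) more memory is the usual fix. A small sketch that just decodes such exit statuses:

```python
import signal

# Decode a shell-style exit code: values above 128 mean the process
# died from signal (code - 128); 137 therefore maps to SIGKILL, the
# signal the OOM killer uses.
def describe_exit_code(code: int) -> str:
    if code > 128:
        return f"killed by signal {signal.Signals(code - 128).name}"
    return f"exited with status {code}"

print(describe_exit_code(137))  # -> killed by signal SIGKILL
```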

Always outputting 1 image when selecting 4

I know it says "Number of images to output. NSFW filter is enabled, so you may get fewer outputs than requested if flagged", but I'm consistently getting only 1 image, with the other 3 always flagged as NSFW. This happens even with prompts such as "all-green checkerboard pattern".

When I use the default prompt "multicolor hyperspace" it always works with 1 image, but no images are produced if I select 4.
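The behaviour described is consistent with the safety checker returning one flag per generated image and flagged images being dropped before the response is assembled. A sketch of that filtering step, with names that are illustrative rather than the exact identifiers in predict.py:

```python
# Illustrative sketch of NSFW filtering: one boolean flag per image,
# flagged images removed before the outputs are returned.
def keep_safe_images(images, nsfw_flags):
    """Return only the images whose NSFW flag is False."""
    return [img for img, flagged in zip(images, nsfw_flags) if not flagged]

# Four generations with three flagged -> one survivor, as in the report.
print(len(keep_safe_images(["a", "b", "c", "d"], [False, True, True, True])))  # -> 1
```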

Test suite

How can we test this model before we push it?
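One low-tech option is to exercise the model through Cog's standard HTTP API before pushing: `cog build`, run the image with port 5000 published, then POST to `/predictions`. A standard-library-only sketch; the prompt and base URL are placeholders:

```python
import json
import urllib.request

def build_payload(prompt: str) -> bytes:
    """JSON body for Cog's standard POST /predictions endpoint."""
    return json.dumps({"input": {"prompt": prompt}}).encode()

def smoke_test(base_url: str = "http://localhost:5000") -> dict:
    # Assumes the built image is running via `docker run -p 5000:5000 <image>`.
    req = urllib.request.Request(
        base_url + "/predictions",
        data=build_payload("multicolor hyperspace"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```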

"cog run script/download-weights token" result with "No such file or directory"

After running cog run script/download-weights <your-hugging-face-auth-token> with hugging face token, the result as fallow:

(base) yellow@red:/mnt/c/cog/cog-stable-diffusion$ cog run script/download-weights ✨token✨
⚠ Cog doesn't know if CUDA 11.6.2 is compatible with PyTorch 1.12.1 --extra-index-url=https://download.pytorch.org/whl/cu116. This might cause CUDA problems.
Building Docker image from environment in cog.yaml...
[+] Building 17.8s (16/16) FINISHED
 => [internal] load build definition from Dockerfile                                                                                                   0.0s
 => => transferring dockerfile: 1.68kB                                                                                                                 0.0s
 => [internal] load .dockerignore                                                                                                                      0.0s
 => => transferring context: 2B                                                                                                                        0.0s
 => resolve image config for docker.io/docker/dockerfile:1.2                                                                                          16.0s
 => CACHED docker-image://docker.io/docker/dockerfile:1.2@sha256:e2a8561e419ab1ba6b2fe6cbdf49fd92b95912df1cf7d313c3e2230a333fdbcc                      0.0s
 => [internal] load metadata for docker.io/nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04                                                                 1.3s
 => [stage-0 1/8] FROM docker.io/nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04@sha256:55211df43bf393d3393559d5ab53283d4ebc3943d802b04546a24f3345825bd9   0.0s
 => [internal] load build context                                                                                                                      0.1s
 => => transferring context: 31.63kB                                                                                                                   0.0s
 => CACHED [stage-0 2/8] RUN rm -f /etc/apt/sources.list.d/cuda.list &&     rm -f /etc/apt/sources.list.d/nvidia-ml.list &&     apt-key del 7fa2af80   0.0s
 => CACHED [stage-0 3/8] RUN --mount=type=cache,target=/var/cache/apt apt-get update -qq && apt-get install -qqy --no-install-recommends  make  build  0.0s
 => CACHED [stage-0 4/8] RUN curl -s -S -L https://raw.githubusercontent.com/pyenv/pyenv-installer/master/bin/pyenv-installer | bash &&  git clone ht  0.0s
 => CACHED [stage-0 5/8] COPY .cog/tmp/build3151131001/cog-0.0.1.dev-py3-none-any.whl /tmp/cog-0.0.1.dev-py3-none-any.whl                              0.0s
 => CACHED [stage-0 6/8] RUN --mount=type=cache,target=/root/.cache/pip pip install /tmp/cog-0.0.1.dev-py3-none-any.whl                                0.0s
 => CACHED [stage-0 7/8] RUN --mount=type=cache,target=/root/.cache/pip pip install   diffusers==0.2.4 torch==1.12.1 --extra-index-url=https://downlo  0.0s
 => CACHED [stage-0 8/8] WORKDIR /src                                                                                                                  0.0s
 => exporting to image                                                                                                                                 0.0s
 => => exporting layers                                                                                                                                0.0s
 => => writing image sha256:12d01ad8e7f6ec81aa8f94935ad22e894dda9ffceb15a93720df79ba1dedff11                                                           0.0s
 => => naming to docker.io/library/cog-cog-stable-diffusion-base                                                                                       0.0s
 => exporting cache                                                                                                                                    0.0s
 => => preparing build cache for export                                                                                                                0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

Running 'script/download-weights ✨token✨' in Docker with the current directory mounted as a volume...
/usr/bin/env: 'python\r': No such file or directory
ⅹ exit status 127

Is it possible to download the model manually?
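The `'python\r'` in that error is the giveaway: the script was checked out with Windows (CRLF) line endings, so the shebang's interpreter name ends in a stray carriage return. Running `dos2unix script/download-weights` (or setting `git config core.autocrlf input` and re-checking out) should fix it. The equivalent in Python, as a sketch:

```python
from pathlib import Path

def strip_crlf(path: str) -> None:
    """Rewrite a file with Unix (LF) line endings, like dos2unix."""
    p = Path(path)
    p.write_bytes(p.read_bytes().replace(b"\r\n", b"\n"))

# Illustrative path -- adjust to wherever the repo is checked out:
# strip_crlf("script/download-weights")
```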
