
Comments (10)

continue-revolution commented on September 15, 2024

#88 vlad has supported the hook I used in my extension, so I will close this.

from sd-webui-animatediff.

continue-revolution commented on September 15, 2024

Please post the terminal error here.

However, since I do not use vlad fork, I cannot guarantee whether/when I will support it.

houseofsecrets commented on September 15, 2024

I am also having problems with the Vlad version.
Every time I start up the webui, the first generation gives no errors, but it only generates one image and makes that into a GIF.

`2023-07-20T07:30:05.945Z INFO AnimateDiff animatediff Removing motion module from SD1.5 UNet input blocks.
2023-07-20T07:30:05.947Z INFO AnimateDiff animatediff Removing motion module from SD1.5 UNet output blocks.
2023-07-20T07:30:05.949Z INFO AnimateDiff animatediff Removal finished.
2023-07-20T07:30:05.950Z INFO AnimateDiff animatediff Merging images into GIF.
2023-07-20T07:30:06.011Z INFO AnimateDiff animatediff AnimateDiff process end.`

After that it fails and gives this error:

`2023-07-20T07:30:27.617Z ERROR sd call_queue Exception: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [2560] and input of shape [2, 5120, 8, 8]
2023-07-20T07:30:27.620Z ERROR sd call_queue Arguments: args=('task(ajde2u1gtzlohin)', 'a man standing in a field', '', [], 24, 4, 0, False, False, 1, 1, 7, 6, 0.7, 1, -1.0, -1.0, 0, 0, 0, 512, 512, False, 0.7, 2, 'Latent', 20, 0, 0, 0.5, 1, '', '', [], 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, False, 'x264', 'blend', 10, 0, 0, False, True, True, True, 'intermediate', 'animation', , , , , False, True, False, 0, -1, True, 0, 16, 8, 'mm_sd_v15.ckpt', False, False, 'Matrix', 'Horizontal', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', '0.4', None, False, False, 'positive', 'comma', 0, False, False, '', 7, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, False, 4.0, '', 10.0, 'Linear', 3, False, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 0, 0, 0.001, 75, 0.0, False, True) kwargs={}
2023-07-20T07:30:27.646Z ERROR sd errors gradio call: RuntimeError`

It will give this error for everything I do in the webui until I restart.
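The shape mismatch in that log can be reproduced in isolation. This is an illustrative sketch, not the extension's code: PyTorch's `GroupNorm` keeps a weight vector sized to the channel count it was constructed with, so if the input's channel dimension later doubles (which is presumably what the leftover motion-module state causes here), it raises exactly this `RuntimeError`.

```python
import torch
import torch.nn as nn

# GroupNorm built for 2560 channels: its weight is a vector of shape [2560].
norm = nn.GroupNorm(num_groups=32, num_channels=2560)

# Matching channel count works fine.
ok = norm(torch.randn(2, 2560, 8, 8))

# Doubled channel count (5120) reproduces the reported error.
try:
    norm(torch.randn(2, 5120, 8, 8))
except RuntimeError as e:
    print(e)  # "Expected weight to be a vector of size equal to the number of channels in input, ..."
```

This suggests the failure is not in `GroupNorm` itself but in whatever upstream reshaping leaves the UNet feeding it tensors with twice the expected channels.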

continue-revolution commented on September 15, 2024

@houseofsecrets does vlad print any sort of stack trace, like error at this line then at that line?

houseofsecrets commented on September 15, 2024

This is what the cmd window prints after the first generation (which produces only one image) and on the second generation, when it breaks.
100%|██████████████████████████████████████████████████████████████████████████████████| 26/26 [00:01<00:00, 14.15it/s] 10:27:15-027166 INFO Removing motion module from SD1.5 UNet input blocks. 10:27:15-029167 INFO Removing motion module from SD1.5 UNet output blocks. 10:27:15-031167 INFO Removal finished. 10:27:15-032168 INFO Merging images into GIF. 10:27:15-090174 INFO AnimateDiff process end. 0%| | 0/26 [00:00<?, ?it/s] 10:27:20-097635 ERROR Exception: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [2560] and input of shape [2, 5120, 8, 8] 10:27:20-099635 ERROR Arguments: args=('task(p1yn71p5ougxy94)', 'a man', '', [], 26, 4, 0, False, False, 1, 1, 7, 6, 0.7, 1, -1.0, -1.0, 0, 0, 0, 512, 512, False, 0.7, 2, 'Latent', 20, 0, 0, 0.5, 1, '', '', [], 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, False, 'x264', 'blend', 10, 0, 0, False, True, True, True, 'intermediate', 'animation', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001D5B5A0B850>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001D5B44AFFA0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001D5B63FC2E0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001D587330DF0>, False, True, False, 0, -1, 
True, 0, 16, 8, 'mm_sd_v15.ckpt', False, False, 'Matrix', 'Horizontal', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', '0.4', None, False, False, 'positive', 'comma', 0, False, False, '', 7, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, False, 4.0, '', 10.0, 'Linear', 3, False, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 0, 0, 0.001, 75, 0.0, False, True) kwargs={} 10:27:20-132643 ERROR gradio call: RuntimeError ┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐ │ C:\WorkingFiles\ML\StableDiffusion\GitRepos\automatic\modules\call_queue.py:34 in f │ │ │ │ 33 │ │ │ try: │ │ > 34 │ │ │ │ res = func(*args, **kwargs) │ │ 35 │ │ │ │ progress.record_results(id_task, res) │ │ │ │ C:\WorkingFiles\ML\StableDiffusion\GitRepos\automatic\modules\txt2img.py:65 in txt2img │ │ │ │ 64 │ if processed is None: │ │ > 65 │ │ processed = processing.process_images(p) │ │ 66 │ p.close() │ │ │ │ ... 35 frames hidden ... 
│ │ │ │ C:\WorkingFiles\ML\StableDiffusion\GitRepos\automatic\venv\lib\site-packages\torch\nn\modules\normalization.py:273 │ │ in forward │ │ │ │ 272 │ def forward(self, input: Tensor) -> Tensor: │ │ > 273 │ │ return F.group_norm( │ │ 274 │ │ │ input, self.num_groups, self.weight, self.bias, self.eps) │ │ │ │ C:\WorkingFiles\ML\StableDiffusion\GitRepos\automatic\venv\lib\site-packages\torch\nn\functional.py:2530 in │ │ group_norm │ │ │ │ 2529 │ _verify_batch_size([input.size(0) * input.size(1) // num_groups, num_groups] + list( │ │ > 2530 │ return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.e │ │ 2531 │ └─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [2560] and input of shape [2, 5120, 8, 8]

continue-revolution commented on September 15, 2024

@houseofsecrets Seems like it’s the same bug as #3. Will look into it tomorrow. Could you share a screenshot of your webui which generated this error?

DanielBelokon commented on September 15, 2024

I think there might not be a before_process hook in vlad's repo, so it's simply not being called the first time; then the modules get "removed" anyway and cause the error. Just a guess, but even if I swap before_process with process I still get

RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [2560] and input of shape [2, 5120, 8, 8]

and sometimes just get noise. Can't reliably reproduce it either.
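The asymmetry described above (cleanup running even though injection never happened) can be sketched like this. Everything here is hypothetical: the class shape and method names mirror the webui's script-hook convention, but the guard flag is an illustration of the fix, not the extension's actual code.

```python
# Hypothetical sketch: if the host fork never calls before_process, the
# motion modules are never injected, yet the removal path still runs and
# can leave the UNet in a mismatched state. A guard flag avoids that.
class AnimateDiffScript:
    def __init__(self):
        self.injected = False

    def before_process(self, p, *args):
        # Only called on forks that support this hook.
        self.inject_motion_modules(p)
        self.injected = True

    def postprocess(self, p, processed=None, *args):
        # Guard: skip removal if injection never ran.
        if self.injected:
            self.remove_motion_modules(p)
            self.injected = False

    # Stubs standing in for the real injection/removal logic.
    def inject_motion_modules(self, p): ...
    def remove_motion_modules(self, p): ...
```

With the flag in place, a fork that skips `before_process` simply produces normal (non-animated) output instead of corrupting the model.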

houseofsecrets commented on September 15, 2024

@houseofsecrets Seems like it’s the same bug as #3. Will look into it tomorrow. Could you share a screenshot of your webui which generated this error?

Something like this?
(screenshot: WebUiScreenshot)

H1ghSyst3m commented on September 15, 2024

Hi, here is my cmd output as well:

`2023-07-21 21:16:58,269 - AnimateDiff - INFO - AnimateDiff process start with video Max frames 10, FPS 6, duration 1.6666666666666667, motion module mm_sd_v15.ckpt.
*** Error running before_process: C:\AI\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
Traceback (most recent call last):
File "C:\AI\stable-diffusion-webui\modules\scripts.py", line 466, in before_process
script.before_process(p, *script_args)
File "C:\AI\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 116, in before_process
self.inject_motion_modules(p, model)
File "C:\AI\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 73, in inject_motion_modules
raise RuntimeError("Please download models manually.")
RuntimeError: Please download models manually.


0%| | 0/14 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(10h5mccsmmt3168)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=1500x1500 at 0x191300CBA30>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.69, -1.0, -1.0, 0, 0, 0, False, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], 0, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_conf': 30, 'ad_dilate_erode': 32, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_full_res': True, 'ad_inpaint_full_res_padding': 0, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_controlnet_model': 'None', 'ad_controlnet_weight': 1}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_conf': 30, 'ad_dilate_erode': 32, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_full_res': True, 'ad_inpaint_full_res_padding': 0, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_controlnet_model': 'None', 'ad_controlnet_weight': 1}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, False, '', 0, False, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, 
False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', True, 0, 10, 6, 'mm_sd_v15.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001913005D2A0>, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Bilinear', False, 'Lerp', '', '', False, False, None, True, '

    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, None, None, '', '', '', '', 'Auto rename', {'label': 'Upload avatars config'}, 'Open outputs directory', 'Export to WebUI style', True, {'label': 'Presets'}, {'label': 'QC preview'}, '', [], 'Select', 'QC scan', 'Show pics', None, False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, 50, 0, 0, 512, 512, False, True, False, False, 0, 1, False, 1, True, True, False, False, ['left-right', 'red-cyan-anaglyph'], 2.5, 'polylines_sharp', 0, False, False, False, False, False, False, 'u2net', False, True, False, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
File "C:\AI\stable-diffusion-webui\modules\call_queue.py", line 55, in f
res = list(func(*args, **kwargs))
File "C:\AI\stable-diffusion-webui\modules\call_queue.py", line 35, in f
res = func(*args, **kwargs)
File "C:\AI\stable-diffusion-webui\modules\img2img.py", line 198, in img2img
processed = process_images(p)
File "C:\AI\stable-diffusion-webui\modules\processing.py", line 620, in process_images
res = process_images_inner(p)
File "C:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\AI\stable-diffusion-webui\modules\processing.py", line 739, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "C:\AI\stable-diffusion-webui\modules\processing.py", line 1316, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "C:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 409, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 278, in launch_sampling
return func()
File "C:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 409, in
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 158, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "C:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "C:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\AI\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
h = module(h, emb, context)
File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\AI\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 19, in mm_tes_forward
x = layer(x, emb)
File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 249, in forward
return checkpoint(
File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 262, in _forward
h = self.in_layers(x)
File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
input = module(input)
File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "C:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 226, in forward
return super().forward(x.float()).type(x.dtype)
File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
return F.group_norm(
File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2528, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [2560] and input of shape [20, 5120, 8, 8]
`

TiagoSantos81 commented on September 15, 2024

I am testing on the Vladmantic fork, and this problem goes away when the batch size is set higher than 1.
The readme mentions that the batch size should be equal to the number of frames to output.
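The batch-equals-frames requirement makes sense if the motion module stacks video frames along the batch axis and unfolds them for temporal processing, as AnimateDiff-style modules do. A small sketch (shapes are illustrative assumptions, not the extension's exact values) shows why the round trip only lines up when the batch size matches the configured frame count:

```python
import torch

# Assumed shapes for illustration: video latents are stacked along the
# batch axis as (frames, channels, h, w), folded out for temporal
# attention, then folded back. Mismatched batch/frame counts break
# this round trip and can misalign channel dimensions downstream.
frames, channels, h, w = 16, 320, 8, 8
x = torch.randn(frames, channels, h, w)  # batch == frames, per the readme

# Fold frames out of the batch axis: (1, channels, frames, h, w).
x_t = x.reshape(1, frames, channels, h, w).permute(0, 2, 1, 3, 4)
assert x_t.shape == (1, channels, frames, h, w)

# Unfold back; shapes only line up because batch matched the frame count.
x_back = x_t.permute(0, 2, 1, 3, 4).reshape(frames, channels, h, w)
assert torch.equal(x, x_back)
```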
