
Comments (11)

M0hammadL avatar M0hammadL commented on June 2, 2024

Hi @Lin-K76, are you running the Keras or TF version?

from scgen.

Lin-K76 avatar Lin-K76 commented on June 2, 2024

Hi @M0hammadL, I'm not fully sure, sorry! I'm still fairly new to Python, but when I run the code it says I'm using the TensorFlow backend. I created a conda environment with Python 3.6, used pip to install scgen 1.1.5, and then ran the example code in PyCharm inside that environment. Does that answer your question, or is there a way I can find out? Thank you!
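For anyone wondering how to check this, a minimal sketch (run inside the same conda environment; it assumes the packages expose a `__version__` attribute, which most do):

```python
# Print the installed package versions and the active Keras backend.
import scgen
import keras
import tensorflow as tf

print("scgen:", scgen.__version__)
print("keras:", keras.__version__)
print("tensorflow:", tf.__version__)
print("keras backend:", keras.backend.backend())  # e.g. "tensorflow"
```

If the last line prints "tensorflow", Keras is running on the TensorFlow backend, which matches the "Using TensorFlow backend." message printed on import.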


XiajieWei avatar XiajieWei commented on June 2, 2024

Hi, I ran into the same problem as described above, using the Keras version (the VAEArithKeras class), as written in the example. My results were the same as in this issue. Could you please give us a more detailed answer? @M0hammadL


TJCooperIL avatar TJCooperIL commented on June 2, 2024

I am also facing this issue and am unable to replicate the results in the example, despite following the code exactly as provided here:

https://nbviewer.jupyter.org/github/M0hammadL/scGen_notebooks/blob/master/notebooks/scgen_kang.ipynb

Python 3.7.0 (conda environment on Ubuntu 20.04)
scGen 1.1.5
Tensorflow 1.15.0
Keras 2.3.0


TJCooperIL avatar TJCooperIL commented on June 2, 2024

I attach the code used and the output figures.

[Attached: two output-figure screenshots (Screenshot-20201115075259, Screenshot-20201115075248).]

>>> import scgen
Using TensorFlow backend.
>>> import scanpy as sc
>>> train = sc.read("./tests/data/train_kang.h5ad",
...                 backup_url="https://goo.gl/33HtVh")
/home/tjcooper/anaconda3/envs/scgen_env/lib/python3.7/site-packages/anndata/compat/__init__.py:161: FutureWarning: Moving element from .uns['neighbors']['distances'] to .obsp['distances'].

This is where adjacency matrices should go now.
  FutureWarning,
/home/tjcooper/anaconda3/envs/scgen_env/lib/python3.7/site-packages/anndata/compat/__init__.py:161: FutureWarning: Moving element from .uns['neighbors']['connectivities'] to .obsp['connectivities'].

This is where adjacency matrices should go now.
  FutureWarning,
>>> train_new = train[~((train.obs["cell_type"] == "CD4T") &
...                     (train.obs["condition"] == "stimulated"))]
/home/tjcooper/anaconda3/envs/scgen_env/lib/python3.7/site-packages/anndata/_core/anndata.py:1094: FutureWarning: is_categorical is deprecated and will be removed in a future version.  Use is_categorical_dtype instead
  if not is_categorical(df_full[k]):
>>> scg = scgen.VAEArithKeras(x_dimension= train.shape[1], model_path="./models/test")
WARNING:tensorflow:From /home/tjcooper/anaconda3/envs/scgen_env/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.

Model: "VAE"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input (InputLayer)           (None, 6998)              0         
_________________________________________________________________
encoder (Model)              (None, 100)               6405000   
_________________________________________________________________
decoder (Model)              (None, 6998)              6331798   
=================================================================
Total params: 12,736,798
Trainable params: 12,730,398
Non-trainable params: 6,400
_________________________________________________________________
>>> 
>>> scg.train(train_data=train_new, n_epochs=100)
2020-11-14 21:26:11.609471: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-11-14 21:26:11.632980: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1999965000 Hz
2020-11-14 21:26:11.633527: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x562c06147ec0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-11-14 21:26:11.633564: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
WARNING:tensorflow:From /home/tjcooper/anaconda3/envs/scgen_env/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

Epoch 1/100
13766/13766 [==============================] - 46s 3ms/step - loss: 126.4641 - kl_loss: 801.6661 - recon_loss: 126.4240
Epoch 2/100
13766/13766 [==============================] - 45s 3ms/step - loss: 88.4356 - kl_loss: 805.0673 - recon_loss: 88.3954
Epoch 3/100
13766/13766 [==============================] - 44s 3ms/step - loss: 83.6606 - kl_loss: 748.9107 - recon_loss: 83.6231
[... epochs 4-99 omitted; loss declines steadily ...]
Epoch 100/100
13766/13766 [==============================] - 43s 3ms/step - loss: 67.6888 - kl_loss: 834.1238 - recon_loss: 67.6471
<keras.callbacks.callbacks.History object at 0x7f3b514472e8>
>>> unperturbed_cd4t = train[((train.obs["cell_type"] == "CD4T") & (train.obs["condition"] == "control"))]
/home/tjcooper/anaconda3/envs/scgen_env/lib/python3.7/site-packages/anndata/_core/anndata.py:1094: FutureWarning: is_categorical is deprecated and will be removed in a future version.  Use is_categorical_dtype instead
  if not is_categorical(df_full[k]):
>>> pred, delta = scg.predict(adata=train_new, adata_to_predict=unperturbed_cd4t,
...                           conditions={"ctrl": "control", "stim": "stimulated"}, cell_type_key="cell_type", condition_key="condition")
>>> pred_adata = sc.AnnData(pred, obs={"condition":["pred"]*len(pred)}, var={"var_names":train.var_names})
>>> CD4T = train[train.obs["cell_type"] =="CD4T"]
>>> all_adata = CD4T.concatenate(pred_adata)
>>> sc.tl.pca(all_adata)
>>> sc.pl.pca(all_adata, color="condition", frameon=False)
/home/tjcooper/anaconda3/envs/scgen_env/lib/python3.7/site-packages/anndata/_core/anndata.py:1192: FutureWarning: is_categorical is deprecated and will be removed in a future version.  Use is_categorical_dtype instead
  if is_string_dtype(df[key]) and not is_categorical(df[key])
... storing 'condition' as categorical
... storing 'cell_type' as categorical
findfont: Font family ['Arial'] not found. Falling back to DejaVu Sans.
findfont: Font family ['Arial'] not found. Falling back to DejaVu Sans.
>>> sc.pl.violin(all_adata, keys="ISG15", groupby="condition")
/home/tjcooper/anaconda3/envs/scgen_env/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
  FutureWarning
/home/tjcooper/anaconda3/envs/scgen_env/lib/python3.7/site-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
  FutureWarning


TJCooperIL avatar TJCooperIL commented on June 2, 2024

I'm able to replicate the results of the batch-correction vignette, just not the Kang data vignette. Any ideas?


TJCooperIL avatar TJCooperIL commented on June 2, 2024

Despite what the vignette states, do not use the Keras VAE (VAEArithKeras). Use the scgen.VAEArith class instead and you will be able to replicate the vignette data. All other code can remain the same.

@M0hammadL please update the vignette code accordingly.
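For anyone following along, the fix amounts to swapping the model class in the code posted earlier in this thread. A sketch based on that code, assuming (as this comment suggests) that VAEArith exposes the same train/predict interface as VAEArithKeras:

```python
import scanpy as sc
import scgen

# Load the Kang dataset and hold out stimulated CD4T cells, as in the vignette.
train = sc.read("./tests/data/train_kang.h5ad", backup_url="https://goo.gl/33HtVh")
train_new = train[~((train.obs["cell_type"] == "CD4T") &
                    (train.obs["condition"] == "stimulated"))]

# The fix: VAEArith instead of VAEArithKeras; everything else is unchanged.
scg = scgen.VAEArith(x_dimension=train.shape[1], model_path="./models/test")
scg.train(train_data=train_new, n_epochs=100)

# Predict the stimulated response for the held-out unperturbed CD4T cells.
unperturbed_cd4t = train[(train.obs["cell_type"] == "CD4T") &
                         (train.obs["condition"] == "control")]
pred, delta = scg.predict(adata=train_new, adata_to_predict=unperturbed_cd4t,
                          conditions={"ctrl": "control", "stim": "stimulated"},
                          cell_type_key="cell_type", condition_key="condition")
```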


M0hammadL avatar M0hammadL commented on June 2, 2024

> Despite what the vignette states, do not use the Keras VAE. Use the scgen.VAEArith function and you will be able to replicate the vignette data. All other code can remain the same.
>
> @M0hammadL please update the vignette code accordingly.

For Kang? Or for the other one?
We will release a TF 2 version soon.


TJCooperIL avatar TJCooperIL commented on June 2, 2024

@M0hammadL For Kang. The other one seems okay; it's reproducible using the current vignette code.


Lin-K76 avatar Lin-K76 commented on June 2, 2024

Thank you both so much for your help! I will try using scgen.VAEArith rather than the Keras VAE to replicate the data. Do you know why they would behave differently? For our own data, I guess it would be best to run both and take it from there.


M0hammadL avatar M0hammadL commented on June 2, 2024

Hi all,

We have released a new PyTorch version of the code. Please use this version.

Best,
Mo
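For readers landing here later: the PyTorch rewrite (scgen >= 2.0, built on scvi-tools) uses a different API from the 1.x code in this thread. A rough sketch of the same workflow; treat the exact class names and parameters (SCGEN, setup_anndata, predict keys) as assumptions to be checked against the current scgen documentation:

```python
import scanpy as sc
import scgen  # scgen >= 2.0 (PyTorch / scvi-tools based)

# Same data and holdout as the 1.x vignette above.
train = sc.read("./tests/data/train_kang.h5ad", backup_url="https://goo.gl/33HtVh")
train_new = train[~((train.obs["cell_type"] == "CD4T") &
                    (train.obs["condition"] == "stimulated"))].copy()

# Register condition and cell-type columns, then train the model.
scgen.SCGEN.setup_anndata(train_new, batch_key="condition", labels_key="cell_type")
model = scgen.SCGEN(train_new)
model.train(max_epochs=100)

# Predict the stimulated response for the held-out CD4T cells.
pred, delta = model.predict(ctrl_key="control", stim_key="stimulated",
                            celltype_to_predict="CD4T")
```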

