
tensorlayerx's Introduction


Please check out TensorLayerX 🔥🔥🔥

TensorLayer is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides an extensive collection of customizable neural layers for building advanced AI models quickly, and building on this, the community has open-sourced a large number of tutorials and applications. TensorLayer was awarded the 2017 Best Open Source Software award by the ACM Multimedia Society. This project can also be found at OpenI and Gitee.

News

  • 🔥 TensorLayerX is a unified deep learning and reinforcement learning framework for all hardware, backends, and operating systems. The current version supports TensorFlow, PyTorch, MindSpore, PaddlePaddle, OneFlow, and Jittor as backends, allowing users to run the same code on different hardware such as Nvidia GPUs and Huawei Ascend.
  • TensorLayer is now in OpenI
  • Reinforcement Learning Zoo: Low-level APIs for professional usage, High-level APIs for simple usage, and a corresponding Springer textbook
  • Sipeed Maix-EMC: Run TensorLayer models on a low-cost AI chip (e.g., K210) (alpha version)

Design Features

TensorLayer is a new deep learning library designed with simplicity, flexibility, and high performance in mind.

  • Simplicity: TensorLayer has a high-level layer/model abstraction which is effortless to learn. You can learn how deep learning can benefit your AI tasks in minutes through the massive number of examples.
  • Flexibility: TensorLayer APIs are transparent and flexible, inspired by the emerging PyTorch library. Compared to the Keras abstraction, TensorLayer makes it much easier to build and train complex AI models.
  • Zero-cost Abstraction: Though simple to use, TensorLayer does not sacrifice any of TensorFlow's performance (see the benchmark section below for details).

TensorLayer stands at a unique spot among the TensorFlow wrappers. Other wrappers like Keras and TFLearn hide many of TensorFlow's powerful features and provide little support for writing custom AI models. Inspired by PyTorch, TensorLayer APIs are simple, flexible and Pythonic, making them easy to learn while being flexible enough to cope with complex AI tasks. TensorLayer has a fast-growing community. It has been used by researchers and engineers all over the world, including those from Peking University, Imperial College London, UC Berkeley, Carnegie Mellon University, Stanford University, and companies like Google, Microsoft, Alibaba, Tencent, Xiaomi, and Bloomberg.

Multilingual Documents

TensorLayer has extensive documentation for both beginners and professionals. The documentation is available in both English and Chinese.

English Documentation Chinese Documentation Chinese Book

If you want to try the experimental features on the master branch, you can find the latest documentation here.

Extensive Examples

You can find a large collection of examples that use TensorLayer here.

Getting Started

TensorLayer 2.0 relies on TensorFlow, numpy, and a few other packages. To use GPUs, CUDA and cuDNN are required.

Install TensorFlow:

pip3 install tensorflow-gpu==2.0.0-rc1 # TensorFlow GPU (version 2.0 RC1)
pip3 install tensorflow # CPU version

Install the stable release of TensorLayer:

pip3 install tensorlayer

Install the unstable development version of TensorLayer:

pip3 install git+https://github.com/tensorlayer/tensorlayer.git

If you want to install the additional dependencies, you can also run

pip3 install --upgrade tensorlayer[all]              # all additional dependencies
pip3 install --upgrade tensorlayer[extra]            # only the `extra` dependencies
pip3 install --upgrade tensorlayer[contrib_loggers]  # only the `contrib_loggers` dependencies

If you are a TensorFlow 1.X user, you can use TensorLayer 1.11.0:

# For last stable version of TensorLayer 1.X
pip3 install --upgrade tensorlayer==1.11.0
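
After installing, a quick smoke test can confirm the setup (a minimal sketch, assuming TensorLayer 2.x with the TensorFlow backend):

import tensorflow as tf
import tensorlayer as tl

# build a one-layer model eagerly to verify the installation
ni = tl.layers.Input([4, 8])
nn = tl.layers.Dense(n_units=2, act=tf.nn.relu)(ni)
print(nn)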

Performance Benchmark

The following table shows the training speeds of VGG16 using TensorLayer and native TensorFlow on a TITAN Xp.

Mode      | Lib             | Data Format  | Max GPU Memory Usage (MB) | Max CPU Memory Usage (MB) | Avg CPU Memory Usage (MB) | Runtime (sec)
AutoGraph | TensorFlow 2.0  | channel last | 11833 | 2161 | 2136 | 74
AutoGraph | TensorLayer 2.0 | channel last | 11833 | 2187 | 2169 | 76
Graph     | Keras           | channel last | 8677  | 2580 | 2576 | 101
Eager     | TensorFlow 2.0  | channel last | 8723  | 2052 | 2024 | 97
Eager     | TensorLayer 2.0 | channel last | 8723  | 2010 | 2007 | 95

Getting Involved

Please read the Contributor Guideline before submitting your PRs.

We suggest users report bugs using GitHub issues. Users can also discuss how to use TensorLayer in the following Slack channel.



Citing TensorLayer

If you find TensorLayer useful for your project, please cite the following papers:

@article{tensorlayer2017,
    author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
    journal = {ACM Multimedia},
    title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
    url     = {http://tensorlayer.org},
    year    = {2017}
}

@inproceedings{tensorlayer2021,
  title={Tensorlayer 3.0: A Deep Learning Library Compatible With Multiple Backends},
  author={Lai, Cheng and Han, Jiarong and Dong, Hao},
  booktitle={2021 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
  pages={1--3},
  year={2021},
  organization={IEEE}
}

tensorlayerx's People

Contributors

hanjr92 · hishambarakat16 · ivorfeng · jianzhnie · laicheng0830 · luka0612 · quantumliu · qzhiyue · zsdonghao


tensorlayerx's Issues

latest-gpu-py3 Dockerfile build reports a NO_PUBKEY error

Issue Description

I am using tensorlayer in a container, but building the latest-gpu-py3 image fails with an error.
The command and the error output are as follows:

sudo docker build -f Dockerfile --build-arg TF_CONTAINER_VERSION="latest-gpu-py3" .

Sending build context to Docker daemon  13.31kB
Step 1/6 : ARG TF_CONTAINER_VERSION
Step 2/6 : FROM tensorflow/tensorflow:${TF_CONTAINER_VERSION}
 ---> e2a4af785bdb
Step 3/6 : LABEL version="1.0" maintainer="Jonathan DEKHTIAR <[email protected]>"
 ---> Using cache
 ---> dd0d507fcf29
Step 4/6 : ARG TL_VERSION
 ---> Using cache
 ---> f8ba110bef14
Step 5/6 : ARG TF_CONTAINER_VERSION
 ---> Using cache
 ---> d5b90af347f8
Step 6/6 : RUN echo "Container Tag: ${TF_CONTAINER_VERSION}"     && apt-get update     && case $TF_CONTAINER_VERSION in             latest-py3 | latest-gpu-py3) apt-get install -y python3-tk  ;;             *)                           apt-get install -y python-tk ;;         esac     && if [ -z "$TL_VERSION" ]; then         echo "Building a Nightly Release"         && apt-get install -y git         && mkdir /dist/ && cd /dist/         && git clone https://github.com/tensorlayer/tensorlayer.git         && cd tensorlayer         && pip install --disable-pip-version-check --no-cache-dir --upgrade -e .[all];     else         echo "Building Tag Release: $TL_VERSION"         && pip install  --disable-pip-version-check --no-cache-dir --upgrade tensorlayer[all]=="$TL_VERSION";     fi     && apt-get autoremove -y     && rm -rf /var/lib/apt/lists/*
 ---> Running in 73251fe2d227
Container Tag: latest-gpu-py3
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:5 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [22.8 kB]
Get:6 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [957 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [29.8 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [2286 kB]
Get:11 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [2798 kB]
Get:9 https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu1804/x86_64  InRelease [1581 B]
Err:9 https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu1804/x86_64  InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A4B469963BF863CC
Ign:10 https://developer.download.nvidia.cn/compute/machine-learning/repos/ubuntu1804/x86_64  InRelease
Get:12 https://developer.download.nvidia.cn/compute/machine-learning/repos/ubuntu1804/x86_64  Release [564 B]
Get:13 https://developer.download.nvidia.cn/compute/machine-learning/repos/ubuntu1804/x86_64  Release.gpg [833 B]
Get:14 https://developer.download.nvidia.cn/compute/machine-learning/repos/ubuntu1804/x86_64  Packages [73.8 kB]
Get:15 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [991 kB]
Get:16 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [1512 kB]
Get:17 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [3231 kB]
Get:18 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [12.9 kB]
Get:19 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [12.2 kB]
Reading package lists...
W: GPG error: https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu1804/x86_64  InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A4B469963BF863CC
E: The repository 'https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  InRelease' is no longer signed.
The command '/bin/bash -c echo "Container Tag: ${TF_CONTAINER_VERSION}"     && apt-get update     && case $TF_CONTAINER_VERSION in             latest-py3 | latest-gpu-py3) apt-get install -y python3-tk  ;;             *)                           apt-get install -y python-tk ;;         esac     && if [ -z "$TL_VERSION" ]; then         echo "Building a Nightly Release"         && apt-get install -y git         && mkdir /dist/ && cd /dist/         && git clone https://github.com/tensorlayer/tensorlayer.git         && cd tensorlayer         && pip install --disable-pip-version-check --no-cache-dir --upgrade -e .[all];     else         echo "Building Tag Release: $TL_VERSION"         && pip install  --disable-pip-version-check --no-cache-dir --upgrade tensorlayer[all]=="$TL_VERSION";     fi     && apt-get autoremove -y     && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100

I found some hints at the link below. Do I need to modify the Dockerfile, and if so, where? Thanks!
Updating the CUDA Linux GPG Repository Key
(That NVIDIA notice covers the 2022 rotation of the CUDA repository signing key; presumably the fix is to fetch the new key in the Dockerfile, e.g. apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub, before the apt-get update step.)

Reproducible Code

  • OS: Ubuntu 20.04
  • Docker: 20.10.16
  • no added code

tensorlayerx.nn with the paddle backend lacks MaxUnPool2D/Pad2D, which paddle.nn has

(1) paddle.nn.MaxUnPool2D(kernel_size, stride=None, padding=0, data_format='NCHW', output_size=None, name=None)
Builds a callable MaxUnPool2D object that computes the inverse of max pooling from the input and the positions of the maxima; all non-maximum values are set to zero.
See https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/MaxUnPool2D_cn.html for details.

(2) class paddle.nn.Pad2D(padding, mode='constant', value=0.0, data_format='NCHW', name=None)
Pads the input according to the padding, mode, and value attributes. See https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/Pad2D_cn.html for details.

The PyTorch implementation of tlx.split is incorrect

Issue Description

The PyTorch implementation of tlx.split is incorrect. When the second argument num_or_size_splits of tlx.split is an int, it means the number of tensors to split into; in torch.split, however, the second argument means the length of each resulting tensor along that dimension. The argument therefore needs to be converted before the call, as sketched below.
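
A minimal sketch of the conversion such a wrapper needs (tlx_style_split is a hypothetical helper name; the torch.split call itself is standard PyTorch):

import torch

def tlx_style_split(x, num_or_size_splits, axis=0):
    # TensorFlow-style semantics: an int means the NUMBER of equal chunks,
    # while torch.split interprets an int as the SIZE of each chunk,
    # so the int case must be converted before delegating.
    if isinstance(num_or_size_splits, int):
        chunk_size = x.shape[axis] // num_or_size_splits
        return torch.split(x, chunk_size, dim=axis)
    return torch.split(x, num_or_size_splits, dim=axis)

x = torch.arange(12).reshape(6, 2)
parts = tlx_style_split(x, 3, axis=0)   # three tensors of shape (2, 2)
print([p.shape for p in parts])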

tensorlayerx.ops.Pad does not support data_format="channels_first"; will this format be added later?


# ======================================================== #
###### tensorlayerx.ops.Pad source code (paddle backend) ######
# ======================================================== #

import paddle as pd  # import added for context; the code below is excerpted from the paddle backend

class Pad(object):

    def __init__(self, paddings, mode="REFLECT", constant_values=0):
        if mode not in ['CONSTANT', 'REFLECT', 'SYMMETRIC']:
            raise Exception("Unsupported mode: {}".format(mode))
        if mode == 'SYMMETRIC':
            raise NotImplementedError
        self.paddings = paddings
        self.mode = mode.lower()
        self.constant_values = constant_values

    def __call__(self, x):
        if len(x.shape) == 3:
            data_format = 'NLC'
            self.paddings = self.correct_paddings(len(x.shape), self.paddings, data_format)
        elif len(x.shape) == 4:
            data_format = 'NHWC'
            self.paddings = self.correct_paddings(len(x.shape), self.paddings, data_format)
        elif len(x.shape) == 5:
            data_format = 'NDHWC'
            self.paddings = self.correct_paddings(len(x.shape), self.paddings, data_format)
        else:
            raise NotImplementedError('Please check the input shape.')
        return pd.nn.functional.pad(x, self.paddings, self.mode, value=self.constant_values, data_format=data_format)

    def correct_paddings(self, in_shape, paddings, data_format):
        if in_shape == 3 and data_format == 'NLC':
            correct_output = [paddings[1][0], paddings[1][1]]
        elif in_shape == 4 and data_format == 'NHWC':
            correct_output = [paddings[2][0], paddings[2][1], paddings[1][0], paddings[1][1]]
        elif in_shape == 5 and data_format == 'NDHWC':
            correct_output = [
                paddings[3][0], paddings[3][1], paddings[2][0], paddings[2][1], paddings[1][0], paddings[1][1]
            ]
        else:
            raise NotImplementedError('Does not support channels first')
        return correct_output

tensorlayerx.nn.ConvTranspose2d lacks the output_padding parameter that paddle.nn.Conv2DTranspose has

Paddle source: class paddle.nn.Conv2DTranspose(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, dilation=1, weight_attr=None, bias_attr=None, data_format='NCHW')
Here output_padding (int|list|tuple, optional) is the extra size added to one side of the output shape. Default: 0.

tensorlayerx (paddle backend) source:
class tensorlayerx.nn.ConvTranspose2d(out_channels=32, kernel_size=(3, 3), stride=(1, 1), act=None, padding='SAME',
data_format='channels_last', dilation=(1, 1), W_init='truncated_normal', b_init='constant', in_channels=None, name=None,  # 'conv2d_transpose'
)

NHWC vs. NCHW issues with the PyTorch backend

Issue Description

With the PyTorch backend, whether a model uses the NHWC or NCHW data format is mainly determined when the data and model are moved to the device with .to("cuda:0", memory_format=torch.channels_last).

TLX's current approach for the PyTorch backend in NHWC mode is to convert everything to NCHW, run the computation, and convert back, which effectively makes the model compute in NCHW. For pure GPU use this is not a big problem, but for deployment on NHWC-friendly devices (for example MindSpore in the future), the repeated NHWC/NCHW switches cost performance.

The framework probably needs to turn the PyTorch NHWC support into a global setting: convert the input data from NCHW to NHWC once, convert the model to NHWC, and then just compute.

That said, PyTorch's own GPU NHWC support is rather poor, so this is not urgent.
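
For reference, a small sketch of what PyTorch's native channels-last support looks like (plain PyTorch, illustrating the permute cost described above):

import torch

x = torch.randn(8, 3, 32, 32)                    # logical NCHW shape
x_cl = x.to(memory_format=torch.channels_last)   # NHWC strides, same logical shape
print(x_cl.shape)                                            # torch.Size([8, 3, 32, 32])
print(x_cl.is_contiguous(memory_format=torch.channels_last)) # True

# Repeatedly permuting between layouts instead (what the issue describes)
# forces a real copy on every switch:
x_nhwc = x.permute(0, 2, 3, 1).contiguous()      # explicit data movement
x_back = x_nhwc.permute(0, 3, 1, 2).contiguous() # and again on the way back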

tensorlayerx.argmax lacks the keepdim parameter that paddle.argmax has

Paddle source: paddle.argmax(x, axis=None, keepdim=False, dtype='int64', name=None)
Here keepdim (bool, optional) controls whether the reduced dimension is kept in the output tensor. If keepdim is True, the output has the same number of dimensions as x, with the reduced dimension having size 1. Default: False.

tensorlayerx source: paddle.argmax(x, axis=None, dtype='int64')
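
Until keepdim is supported, a workaround sketch is to restore the reduced axis by hand (assuming tlx.expand_dims is available, as in the other backends):

import os
os.environ['TL_BACKEND'] = 'paddle'
import tensorlayerx as tlx

x = tlx.convert_to_tensor([[1.0, 3.0, 2.0], [4.0, 0.0, 5.0]])
idx = tlx.argmax(x, axis=1)             # shape (2,)
idx_keepdim = tlx.expand_dims(idx, 1)   # shape (2, 1), emulating keepdim=True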

tlx has no max_pool2d, avg_pool2d, conv2d_transpose, max_unpool2d, or normalize operators corresponding to the paddle.nn.functional module

(1) paddle.nn.functional.max_pool2d() builds a callable max_pool2d object that constructs a 2D max pooling layer and max-pools the input according to kernel_size, stride, padding, and other parameters.
See https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/functional/max_pool2d_cn.html for details.

(2) paddle.nn.functional.avg_pool2d() is a 2D average pooling function that constructs a 2D average pooling layer and average-pools the input according to kernel_size, stride, padding, and other parameters.
See https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/functional/avg_pool2d_cn.html for details.

(3) paddle.nn.functional.conv2d_transpose() is a 2D transposed convolution (Convolution2D transpose) layer that computes the output feature-map size from the input, kernel, dilations, stride, and padding, or takes it from output_size. Input and output are in NCHW or NHWC format, where N is the batch size, C the number of channels, H the feature-map height, and W the width. The kernel is in MCHW format, where M is the number of output channels, C the number of input channels, and H and W the kernel height and width. If the group count is greater than 1, C equals the number of input channels divided by the group count. Transposed convolution computes the reverse of a convolution and is often called deconvolution (although it is not a true deconvolution). If bias_attr is not False, a bias term is added; if act is not None, the activation is applied afterwards.
See https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/functional/conv2d_transpose_cn.html for details.

(4) paddle.nn.functional.max_unpool2d(x, indices, kernel_size, stride=None, padding=0, data_format='NCHW', output_size=None, name=None) implements the 2D max unpooling operation.
See https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/functional/max_unpool2d_cn.html for details.

(5) paddle.nn.functional.normalize(x, p=2, axis=1, epsilon=1e-12, name=None) applies Lp normalization along the given axis.

A partial workaround is sketched below.
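
Until the functional forms exist, one workaround sketch is to instantiate the corresponding TLX layer objects inline, since they are callable (hedged: this pays layer-construction overhead on each call and only covers ops that have layer equivalents):

import os
os.environ['TL_BACKEND'] = 'paddle'
import tensorlayerx as tlx

x = tlx.nn.Input([4, 32, 32, 16])
# a layer instance used as a one-off function stands in for max_pool2d
y = tlx.nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding='VALID')(x)
print(y.shape)   # (4, 16, 16, 16) with the default channels_last layout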

LayerNorm layer parameter initialization

Issue Description

How are the parameters of the LayerNorm layer initialized in tlx? I found that after running tlx.nn.LayerNorm(num), although gamma_init and beta_init are given default values, the parameter vectors hold no values and the trainable weights are empty.

tlx version: 0.5.6

Reproducible Code

import os
os.environ['TL_BACKEND'] = 'paddle'
import paddle
import tensorlayerx as tlx

a = tlx.nn.LayerNorm(128)

# compare
b = paddle.nn.LayerNorm(128)
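
One hedged explanation: many TLX layers allocate their weights lazily in build(), so gamma/beta may only appear after a first forward pass with a concrete shape. A quick check (sketch, tlx 0.5.6 assumed):

import os
os.environ['TL_BACKEND'] = 'paddle'
import tensorlayerx as tlx

a = tlx.nn.LayerNorm(128)
print(a.trainable_weights)    # reportedly empty before the layer is built

x = tlx.nn.Input([8, 128])
_ = a(x)                      # a forward pass triggers build()
print(a.trainable_weights)    # gamma/beta should now be listed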

DataLoader() raises an error: 0.5.6 trains normally, but the latest version fails as follows:

Traceback (most recent call last):
  File "D:/sthq/code/tensorlayerX/train_vision.py", line 136, in <module>
    train_tlx('fastfcn')
  File "D:/sthq/code/tensorlayerX/train_vision.py", line 123, in train_tlx
    to_static_training=cfg.to_static_training
  File "D:\sthq\code\tensorlayerX\tlx_models\core\train.py", line 146, in train
    for i in loader:
  File "F:\anaconda\envs\tensorlayerX\lib\site-packages\tensorlayerx\dataflow\utils.py", line 417, in __next__
    data = self._next_data()
  File "F:\anaconda\envs\tensorlayerX\lib\site-packages\tensorlayerx\dataflow\utils.py", line 438, in _next_data
    data = self._dataset_fetcher.fetch(index)
  File "F:\anaconda\envs\tensorlayerX\lib\site-packages\tensorlayerx\dataflow\utils.py", line 350, in fetch
    return self.collate_fn(data)
  File "F:\anaconda\envs\tensorlayerX\lib\site-packages\tensorlayerx\dataflow\utils.py", line 308, in default_collate
    return default_collate_paddle(batch)
  File "F:\anaconda\envs\tensorlayerX\lib\site-packages\tensorlayerx\dataflow\utils.py", line 184, in default_collate_paddle
    return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
  File "F:\anaconda\envs\tensorlayerX\lib\site-packages\tensorlayerx\dataflow\utils.py", line 184, in <dictcomp>
    return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
  File "F:\anaconda\envs\tensorlayerX\lib\site-packages\tensorlayerx\dataflow\utils.py", line 308, in default_collate
    return default_collate_paddle(batch)
  File "F:\anaconda\envs\tensorlayerX\lib\site-packages\tensorlayerx\dataflow\utils.py", line 173, in default_collate_paddle
    return default_collate([paddle.to_tensor(b) for b in batch])
  File "F:\anaconda\envs\tensorlayerX\lib\site-packages\tensorlayerx\dataflow\utils.py", line 308, in default_collate
    return default_collate_paddle(batch)
  File "F:\anaconda\envs\tensorlayerX\lib\site-packages\tensorlayerx\dataflow\utils.py", line 165, in default_collate_paddle
    return paddle.stack(batch, 0)
  File "F:\anaconda\envs\tensorlayerX\lib\site-packages\paddle\tensor\manipulation.py", line 903, in stack
    return layers.stack(x, axis, name)
  File "F:\anaconda\envs\tensorlayerX\lib\site-packages\paddle\fluid\layers\nn.py", line 10397, in stack
    return _C_ops.stack(x, 'axis', axis)
RuntimeError: (NotFound) Operator stack does not have kernel for data_type[uint8_t]:data_layout[Undefined(AnyLayout)]:place[Place(cpu)]:library_type[PLAIN].
  [Hint: Expected kernel_iter != kernels.end(), but received kernel_iter == kernels.end().] (at C:\home\workspace\Paddle_release\paddle\fluid\imperative\prepared_operator.cc:403)
  [operator < stack > error]

Process finished with exit code 1
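
The final error says paddle's stack op has no uint8 kernel, so one workaround sketch is to cast samples to supported dtypes in the dataset before the default collate batches them (MyDataset here is a hypothetical stand-in for the dataset in the report):

import numpy as np
from tensorlayerx.dataflow import Dataset

class MyDataset(Dataset):

    def __init__(self, images, labels):
        self.images, self.labels = images, labels

    def __getitem__(self, idx):
        # cast uint8 image data before batching; paddle.stack lacks a uint8 kernel
        img = self.images[idx].astype('float32') / 255.0
        return img, np.int64(self.labels[idx])

    def __len__(self):
        return len(self.images)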

Support in tlx for nested network structures

Issue Description

Does tensorlayerx currently support running network models with nested structures when paddle is the backend? I ran the nested_usage_of_layer.py example provided by tlx: it runs with the tf backend, but switching to the paddle backend raises this error:

Traceback (most recent call last):
  File "D:/ProjectByPython/code/reference/DL_Platform/TensorLayer/TensorLayerX-0.5.6/examples/basic_tutorials/nested_usage_of_layer.py", line 159, in <module>
    grad = tape.gradient(_loss_ce, train_weights)
  File "D:\Software\miniconda3\envs\py37env_tlx\lib\site-packages\tensorflow_core\python\eager\backprop.py", line 984, in gradient
    if not t.dtype.is_floating:
AttributeError: 'paddle.fluid.core_avx.VarType' object has no attribute 'is_floating'

Reproducible Code

python: 3.7
paddlepaddle version: 2.3.0
tensorlayerx version: 0.5.6

The code is the nested_usage_of_layer.py example file.
nested_usage_of_layer.txt

When converting other networks with nested structures myself, such as resnet, I found that they also run with the paddle backend, but the results are not correct: the outputs already diverge from the BatchNorm2d layer onwards.
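
For what it's worth, the traceback shows tf.GradientTape receiving paddle tensors, so the training loop in that example is TensorFlow-specific and has to be swapped per backend. A minimal paddle-backend sketch of the equivalent step (assuming eager/dygraph mode):

import os
os.environ['TL_BACKEND'] = 'paddle'
import paddle
import tensorlayerx as tlx

net = tlx.nn.Linear(out_features=2, in_features=4)
x = paddle.randn([8, 4])
y = paddle.randint(0, 2, [8])

out = net(x)
loss = tlx.losses.softmax_cross_entropy_with_logits(out, y)
loss.backward()   # paddle autograd replaces tf.GradientTape here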

load pretrained model from .pth

I wrote a model in PyTorch and saved its state_dict() to a .pth file. Now I want to rewrite it with tensorlayerx so that other people (using tensorflow, etc.) can use this model.
My model definition is the same in PyTorch and TensorLayerX, but I can't load the pretrained .pth in tensorlayerx.
Below is my code (a simple model is used here for clarity; the actual model is more complex).

"""
a_torch.py
"""
import torch
from torch import nn

class A(nn.Module):
    def __init__(self):
        super(A, self).__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=1)
        self.bn = nn.BatchNorm2d(16)
        self.relu = nn.ReLU(inplace=True)
    
    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

if __name__ == '__main__':
    a = A()
    torch.save(a.state_dict(), 'a.pth')
"""
a_tlx.py
"""
import tensorlayerx as tlx
import torch
from tensorlayerx import nn

class A(nn.Module):
    def __init__(self):
        super(A, self).__init__()
        self.conv = nn.Conv2d(16, kernel_size=1, data_format='channels_first')
        self.bn = nn.BatchNorm2d(num_features=16, data_format='channels_first')
        self.relu = nn.activation.ReLU()
    
    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

def pth2npz(pth_path):
    temp = torch.load(pth_path)   # type(temp) = OrderedDict
    tlx.files.save_npz_dict(temp.items(), pth_path.split('.')[0] + '.npz')

if __name__ == '__main__':
    a = A()
    pth2npz('a.pth')
    tlx.files.load_and_assign_npz_dict('a.npz', a)

First run a_torch.py, then run a_tlx.py.
The error is below.

Using PyTorch backend.
Traceback (most recent call last):
  File "test/test_03.py", line 25, in <module>
    tlx.files.load_and_assign_npz_dict('test/a.npz', a)
  File "/home/mchen/anaconda3/envs/kpconv/lib/python3.8/site-packages/tensorlayerx/files/utils.py", line 2208, in load_and_assign_npz_dict
    raise RuntimeError(
RuntimeError: Weights named 'conv.weight' not found in network. Hint: set argument skip=Ture if you want to skip redundant or mismatch weights

Then I debugged and looked at the tlx.files.load_and_assign_npz_dict() source code. I found that the TensorLayerX parameter names differ from PyTorch's, which causes a key mismatch when loading the pretrained model.
In the following two figures, the first shows the PyTorch parameter names and the second the TensorLayerX parameter names.
(screenshots omitted: PyTorch parameter names vs. TensorLayerX parameter names)
The only solution I can think of now is to write a key mapping table by hand, but that is hard for a large model. Can you suggest a simpler solution? (same model definition in pytorch and tensorlayerx, load the pretrained model from .pth) 😁 One possible approach is sketched below.
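
One possible generalization of pth2npz, sketched under the assumption (suggested by the screenshots) that the names differ only by suffix, '.weight' -> '.W' and '.bias' -> '.b'; layers such as BatchNorm may need extra entries:

import torch
import tensorlayerx as tlx

def pth2npz(pth_path, npz_path):
    state = torch.load(pth_path)
    renamed = {}
    for key, value in state.items():
        # rewrite torch-style suffixes into the tlx names observed above
        if key.endswith('.weight'):
            key = key[:-len('.weight')] + '.W'
        elif key.endswith('.bias'):
            key = key[:-len('.bias')] + '.b'
        renamed[key] = value
    tlx.files.save_npz_dict(renamed.items(), npz_path)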

tensorlayerx.nn.PRelu() does not have built-in initialization; self.build() -> inputs_shape has no description

class PRelu(Module):
    r"""Applies the element-wise function:

    .. math::
        \text{PReLU}(x) = \max(0,x) + a * \min(0,x)

    Parameters
    ----------
    num_parameters : int
        number of `a` to learn.  1, or the number of channels at input. Default: 1
    init : float
        the initial value of `a`. Default: 0.25
    data_format : str
        Data format that specifies the layout of input. It may be 'channels_last' or 'channels_first'. Default is 'channels_last'.
    name : None or str
        A unique layer name.

    Examples
    -----------
    >>> inputs = tlx.nn.Input([10, 5, 10])
    >>> prelulayer = tlx.nn.PRelu(num_parameters=5, init=0.25, data_format='channels_first')(inputs)

    References
    -----------
    - `Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification <http://arxiv.org/abs/1502.01852>`__
    - `Convolutional Deep Belief Networks on CIFAR-10 [A. Krizhevsky, 2010] <http://www.cs.utoronto.ca/~kriz/conv-cifar10-aug2010.pdf>`__

    """

    def __init__(
        self, num_parameters=1, init=0.25, data_format='channels_last', name=None,
    ):
        super(PRelu, self).__init__(name)
        self.num_parameters = num_parameters
        self.init = init
        self.data_format = data_format

        logging.info("PRelu %s: num_parameters: %s" % (self.name, self.num_parameters))

    def __repr__(self):
        s = ('{classname}(')
        s += 'num_parameters={num_parameters},'
        s += 'init={init},'
        s += 'name={name}'
        s += ')'
        return s.format(classname=self.__class__.__name__, **self.__dict__)

    def build(self, inputs_shape):
        dim = len(inputs_shape)
        if self.data_format == 'channels_last':
            w_shape = (self.num_parameters, )
        elif self.data_format == 'channels_first':
            if dim == 4:
                w_shape = (1, self.num_parameters, 1, 1)
            elif dim == 3:
                w_shape = (1, self.num_parameters, 1)
            elif dim == 5:
                w_shape = (1, self.num_parameters, 1, 1, 1)
            elif dim < 3:
                w_shape = (self.num_parameters, )

        self.alpha = self._get_weights("alpha", shape=w_shape, init=tlx.initializers.constant(value=self.init))
        self.prelu = tlx.ops.PReLU(data_format=self.data_format)

    def forward(self, inputs):
        if self._forward_state == False:
            if self._built == False:
                self.build(tlx.get_tensor_shape(inputs))
                self._built = True
            self._forward_state = True

        output = self.prelu(inputs, self.alpha)

        if not self._nodes_fixed and self._build_graph:
            self._add_node(inputs, output)
            self._nodes_fixed = True
        return output

net.set_eval() does not seem to work well

Issue Description

When I test my pspnet model, I find that if I don't use "with torch.no_grad()" or "gradient()", GPU memory fills up after testing several photos. I guess the set_eval() function has failed, or am I testing the wrong way? This is my code, thank you!

In addition, I found that the batch size affects the final test results. If net.eval() is not called in pytorch, it causes a similar problem. It seems that this is caused by the BatchNorm layer.

    os.environ['TL_BACKEND'] = 'torch'
    tlx.set_device(device='GPU', id=3)
    # ...
    net = models[backend]()
    net.load_weights('test.npz', format='npz_dict', skip=True)
    test_dataset = MyDataset(root_dir="test/")
    test_loader = DataLoader(test_dataset, batch_size=4, shuffle=True)

    train_weights = net.trainable_weights
    scheduler = tlx.optimizers.lr.StepDecay(learning_rate=0, step_size=30, gamma=0.5, last_epoch=-1)
    optimizer = tlx.optimizers.Adam(lr=scheduler)

    hist = np.zeros((num_classes, num_classes))
    net.set_eval()
    # with torch.no_grad():
    for x, y, y_cls in test_loader:
        _out, _out_cls = net(x)
        seg_loss = tlx.losses.softmax_cross_entropy_with_logits(_out, y)
        cls_loss = tlx.losses.sigmoid_cross_entropy(_out_cls, y_cls)
        _loss = seg_loss + 1 * cls_loss
        # grads = optimizer.gradient(_loss, train_weights)
        # optimizer.apply_gradients(zip(grads, train_weights))
        '''
            compute miou matrix
        '''
        out = tlx.convert_to_numpy(_out)
        y = tlx.convert_to_numpy(y)
        out = np.argmax(out, axis=1)
        for i in range(0, out.shape[0]):
            pred = out[i]
            gt = y[i]
            hist += fast_hist(gt.flatten(), pred.flatten(), num_classes)
            
    # compute miou then print
    mIoUs = per_class_iu(hist)
    for ind_class in range(num_classes):
        print('===>' + name_classes[ind_class] + ':\t' + str(round(mIoUs[ind_class] * 100, 2)))
    print('===> mIoU: ' + str(round(np.nanmean(mIoUs) * 100, 2)))
    print("test loss: {}".format(train_loss))

tensorlayerx.nn has no operator corresponding to paddle.nn.InstanceNorm2D

paddle.nn.InstanceNorm2D(num_features, epsilon=1e-05, momentum=0.9, weight_attr=None, bias_attr=None, data_format="NCHW", name=None)
See the API docs for more: https://www.paddlepaddle.org.cn/documentation/docs/zh/2.3/api/paddle/nn/InstanceNorm2D_cn.html#instancenorm2d


tensorlayerx.nn.UpSampling2d with data_format="channels_first" produces output dimensions inconsistent with paddle.nn.Upsample

Reproducible Code

import os
import paddle
os.environ['TL_BACKEND'] = 'paddle'
import tensorlayerx as tlx

tlx_ni = tlx.nn.Input([4, 32, 50, 50], name='input')
tlx_out = tlx.nn.UpSampling2d(scale=(2, 2), data_format="channels_first")(tlx_ni)
print(f"tlx_out.shape={tlx_out.shape}")

pd_ni = paddle.rand([4, 32, 50, 50], dtype="float32")
pd_out = paddle.nn.Upsample(scale_factor=2, data_format="NCHW")(pd_ni)
print(f"pd_out.shape={pd_out.shape}")


Output:
tlx_out.shape=[4, 32, 64, 100]
pd_out.shape=[4, 32, 100, 100]
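
Reading the numbers above, the scale seems to be applied to axes 1 and 3 (32 -> 64, 50 -> 100), i.e. the layer appears to index H and W as if the input were channels_last. A workaround sketch until this is fixed (assuming tlx.transpose follows the usual perm semantics): transpose to channels_last, upsample, transpose back:

import os
os.environ['TL_BACKEND'] = 'paddle'
import tensorlayerx as tlx

ni = tlx.nn.Input([4, 32, 50, 50])
nhwc = tlx.transpose(ni, perm=[0, 2, 3, 1])     # NCHW -> NHWC
up = tlx.nn.UpSampling2d(scale=(2, 2), data_format="channels_last")(nhwc)
out = tlx.transpose(up, perm=[0, 3, 1, 2])      # back to NCHW
print(out.shape)                                # expected (4, 32, 100, 100)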

🔥 TODO List

Contributions are welcome! You are also welcome to make new requests to the community!

Framework

Mission | Contributor | Description | Status
Core | @Laicheng0830 @hanjr92 | Core code of the TensorLayerX framework | Maintaining
🔥 Distributed training | @hishambarakat16 @QuantumLiu | Platform-agnostic distributed training (model and data parallelism) | Contributions and discussions are welcome
Codes converter | | Converting code of other frameworks to TensorLayerX | Contributions and discussions are welcome

Contact: [email protected] Dr.Dong

Tools

Mission | Contributor | Description | Status
TLX2ONNX | @Laicheng0830 @hanjr92 | Export TLX models to ONNX format | Developing
OpenIVA | @QuantumLiu | End-to-end intelligent vision analytics development toolkit | Developing
Codes converter | | Converting code of other frameworks to TensorLayerX | Contributions and discussions are welcome

Contact: [email protected] @Laicheng0830

Documentation

Mission | Contributor | Description | Status
Chinese (中文) | @QuantumLiu | Chinese docs | Developing
English | | English docs | Maintaining
Arabic | @QuantumLiu | Arabic docs | Developing
Russian | | Russian docs | Contributions and discussions are welcome

Contact: [email protected] WeChat liuyiliang100

CV algorithms

Mission | Contributor | Description | Status
ResNet | @Luka0612 | ResNet image classification | Done, maintaining
VGGNet | @Luka0612 | VGG image classification | Done, maintaining
DETR | @Luka0612 | Object detection based on Transformer | Done, maintaining
YOLO V4 | @Luka0612 | YOLO V4 object detection | Done, maintaining
🔥 PP-YOLO-E | @xxx | Cloud-edge-device integrated object detection | Developing
UNET | @Luka0612 | Semantic segmentation | Done, maintaining
HRNet | @Luka0612 | Human pose estimation | Done, maintaining
TROCR | @Luka0612 | Transformer-based OCR character recognition | Done, maintaining
retinaface | @Luka0612 | Face detection | Done, maintaining
arcface | @Luka0612 | Facial feature embedding | Done, maintaining
Face landmark pfld | @xxx | 68 facial landmarks | Developing

Contact: [email protected] WeChat liuyiliang100

NLP algorithms

Mission | Contributor | Description | Status
T5 NMT | @Luka0612 | T5 NMT | Done, maintaining
T5 text classification | @Luka0612 | T5 text classification | Done, maintaining
BERT text classification | @Luka0612 | BERT text classification | Done, maintaining
T5 token classification | @Luka0612 | T5 token classification | Done, maintaining
BERT token classification | @Luka0612 | BERT token classification | Done, maintaining

Reinforcement Learning Algorithms

Mission | Contributor | Description | Status
RLzoo | | Reinforcement Learning Toolkit | Converting to TLX version

Graph Learning

Task | Contributors | Description | Status
Graph Library | @clearhanhui @Theheavens @zsy0828 @dddg617 | Graph Library | Finished and improving

AutoML

ImportError: cannot import name 'distributed_init' from 'tensorlayerx.backend.ops.load_backend'

Issue Description

When running with tensorflow as the backend, the following error, which relates to the tensorlayerx package, is raised.

ImportError: cannot import name 'distributed_init' from 'tensorlayerx.backend.ops.load_backend'

I think the issue was introduced by the "add torch distributed" commit, since there is no distributed_init() in tensorflow_backend.py.

Reproducible Code

  • Which OS are you using? Windows 10, Python 3.8, CUDA 10.2

  • Please provide a reproducible code of your issue. Without any reproducible code, you will probably not receive any help.

I'm trying to run the SRGAN project's train.py --mode=eval with tensorflow as the backend and the provided pretrained tensorflow weights.

The results of tlx.nn.Swish() and paddle.nn.Swish() differ slightly

tlx:

[-0.16246916, 1.40204561, 0.85213524, ..., 0.85800600,
1.10605156, 1.11549926],
[-0.04873780, 0.28885114, 0.15792340, ..., 0.12375022,
0.22599602, 0.53073120],
[-0.09840852, 0.40172467, 0.15602632, ..., 0.09853011,
0.29177830, 0.52241892]

paddle:

[-0.16246916, 1.40204573, 0.85213524, ..., 0.85800600,
1.10605145, 1.11549926],
[-0.04873780, 0.28885114, 0.15792342, ..., 0.12375022,
0.22599602, 0.53073120],
[-0.09840852, 0.40172467, 0.15602632, ..., 0.09853011,
0.29177833, 0.52241892]
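
The printed values differ only in the last significant digit, which looks like float32 rounding rather than a real defect. A quick way to bound the difference (sketch, paddle backend assumed):

import os
os.environ['TL_BACKEND'] = 'paddle'
import numpy as np
import paddle
import tensorlayerx as tlx

x = paddle.uniform([1000], min=-3.0, max=3.0)
y_tlx = tlx.nn.Swish()(x)
y_pd = paddle.nn.Swish()(x)
print(np.abs(tlx.convert_to_numpy(y_tlx) - y_pd.numpy()).max())   # expected ~1e-7, float32 eps level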

tensorlayerx has no optimizer base class; one can only check against tlx.optimizers.paddle_optimizers.Optimizer


# paddle
import paddle
x = 13
print(isinstance(x, paddle.optimizer.Optimizer))

# tensorlayerx
import os
os.environ['TL_BACKEND'] = 'paddle'
import tensorlayerx as tlx
x = 13
print(isinstance(x, tlx.optimizers.paddle_optimizers.Optimizer))

The predictions of a Paddle image classification model become inaccurate after adapting it to tlx operators

Issue Description

Description:
Using an image classification model as an example, I tried porting a model already implemented in Paddle to the TensorLayerX framework to test feasibility.
The pretrained weights are the official vgg16 weights from the Paddle website.

Steps (see the code below):

  1. Replace the Conv, Pooling, Norm, and Linear operators in Paddle's VGG16 with the corresponding TensorLayerX operators
  2. Load the Paddle VGG16 pretrained parameters into the converted VGG16 network structure
  3. Check whether the Paddle VGG16 and the converted model produce the same classification prediction on the same test image

Questions:

  1. The predictions of the converted vgg16 differ substantially from Paddle's vgg16 and are inaccurate
  2. Comparing before and after the conversion, the outputs already diverge clearly at the first conv2d layer, and the divergence is larger at the classification output, making the predictions very inaccurate
  3. Is there a difference between the paddle and tensorlayerx conv2d operators? (See the diagnostic sketch below.)
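
A diagnostic sketch for question 3: isolate a single conv2d, copy the same weights into both frameworks, and diff the outputs; this separates a layout/weight-order mismatch from a model-level bug. The W/b attribute names and the assumed HWIO filter layout are guesses taken from the key-mapping table in the code below, not confirmed API:

import os
os.environ['TL_BACKEND'] = 'paddle'
import numpy as np
import paddle
import tensorlayerx as tlx

np.random.seed(0)
x_nhwc = np.random.rand(1, 8, 8, 3).astype('float32')

tlx_conv = tlx.nn.Conv2d(out_channels=4, kernel_size=(3, 3), stride=(1, 1),
                         padding='SAME', in_channels=3)
y_tlx = tlx_conv(tlx.convert_to_tensor(x_nhwc))        # the forward pass also builds the weights

pd_conv = paddle.nn.Conv2D(in_channels=3, out_channels=4, kernel_size=3, padding=1)
# copy tlx filters (assumed HWIO) into paddle's OIHW weights
w_hwio = tlx.convert_to_numpy(tlx_conv.W)
pd_conv.weight.set_value(paddle.to_tensor(np.transpose(w_hwio, (3, 2, 0, 1))))
pd_conv.bias.set_value(paddle.to_tensor(tlx.convert_to_numpy(tlx_conv.b)))

y_pd = pd_conv(paddle.to_tensor(np.transpose(x_nhwc, (0, 3, 1, 2))))
diff = np.abs(np.transpose(tlx.convert_to_numpy(y_tlx), (0, 3, 1, 2)) - y_pd.numpy()).max()
print(diff)   # ~0 if only the layout differs; large if the operators disagree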

Reproducible Code

  • paddlepaddle version: 2.2.0
  • tensorlayerx version: 0.5.3
import os
os.environ['TL_BACKEND'] = 'paddle'
# os.environ['TL_BACKEND'] = 'tensorflow'
import tensorlayerx.nn as nn
from tensorlayerx import logging
from tensorlayerx.files import assign_weights
from ch1_clas.pd_download import get_weights_path_from_url
import numpy as np
import paddle
from paddle import to_tensor
from PIL import Image
import copy

__all__ = []

model_urls = {
    'tlxvgg16': ('https://paddle-hapi.bj.bcebos.com/models/vgg16.pdparams',
              '89bbffc0f87d260be9b8cdc169c991c4'),
    'tlxvgg19': ('https://paddle-hapi.bj.bcebos.com/models/vgg19.pdparams',
              '23b18bb13d8894f60f54e642be79a0dd')
}


class VGG(nn.Module):
    """VGG model from
    `"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>`_
    Args:
        features (nn.Layer): Vgg features create by function make_layers.
        num_classes (int): Output dim of last fc layer. If num_classes <=0, last fc layer 
                            will not be defined. Default: 1000.
        with_pool (bool): Use pool before the last three fc layer or not. Default: True.
    Examples:
        .. code-block:: python
            from paddle.vision.models import VGG
            from paddle.vision.models.vgg import make_layers
            vgg11_cfg = [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M']
            features = make_layers(vgg11_cfg)
            vgg11 = VGG(features)
    """

    def __init__(self, features, num_classes=1000, with_pool=True):
        super(VGG, self).__init__()
        self.features = features
        self.num_classes = num_classes
        self.with_pool = with_pool

        if self.with_pool:
            # self.avgpool = nn.AdaptiveAvgPool2D((7, 7))
            self.avgpool = nn.AdaptiveMeanPool2d((7, 7))

        if num_classes > 0:
            self.classifier = nn.Sequential(
                nn.Linear(out_features=4096, act=None, in_features=512 * 7 * 7),
                nn.ReLU(),
                nn.Linear(out_features=4096, act=None, in_features=4096),
                nn.ReLU(),
                nn.Linear(in_features=4096, out_features=num_classes),
            )

    def forward(self, x):
        print(self.features[0](x).shape)
        x = self.features(x)
        if self.with_pool:
            x = self.avgpool(x)
        if self.num_classes > 0:
            x = paddle.flatten(x, 1)
            print('x.numpy =', x.shape)
            x = self.classifier(x)
        return x


def make_layers(cfg, batch_norm=False):
    layers = []
    in_channels = 3
    for v in cfg:
        if v == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2, padding=0)]  # padding defaults to 'SAME'
        else:
            conv2d = nn.Conv2d(out_channels=v, kernel_size=(3, 3), stride=(1, 1), act=None, padding=1, in_channels=in_channels)
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(num_features=v), nn.ReLU()]
            else:
                layers += [conv2d, nn.ReLU()]
            in_channels = v
    return nn.Sequential(*layers)


cfgs = {
    'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'B':
    [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'D': [
        64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512,
        512, 512, 'M'
    ],
    'E': [
        64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512,
        'M', 512, 512, 512, 512, 'M'
    ],
}
#################### added: pd2tlx key mapping #################
pd2tlx = {'features.0.weight': 'features.0.W',
         'features.2.weight': 'features.2.W',
         'features.5.weight': 'features.5.W',
         'features.7.weight': 'features.7.W',
         'features.10.weight': 'features.10.W',
         'features.12.weight': 'features.12.W',
         'features.14.weight': 'features.14.W',
         'features.17.weight': 'features.17.W',
         'features.19.weight': 'features.19.W',
         'features.21.weight': 'features.21.W',
         'features.24.weight': 'features.24.W',
         'features.26.weight': 'features.26.W',
         'features.28.weight': 'features.28.W',
         'features.0.bias': 'features.0.b',
         'features.2.bias': 'features.2.b',
         'features.5.bias': 'features.5.b',
         'features.7.bias': 'features.7.b',
         'features.10.bias': 'features.10.b',
         'features.12.bias': 'features.12.b',
         'features.14.bias': 'features.14.b',
         'features.17.bias': 'features.17.b',
         'features.19.bias': 'features.19.b',
         'features.21.bias': 'features.21.b',
         'features.24.bias': 'features.24.b',
         'features.26.bias': 'features.26.b',
         'features.28.bias': 'features.28.b',
         'classifier.0.weight': 'classifier.0.W',
         'classifier.3.weight':'classifier.2.W',
         'classifier.6.weight':'classifier.4.W',
         'classifier.0.bias': 'classifier.0.b',
         'classifier.3.bias':'classifier.2.b',
         'classifier.6.bias':'classifier.4.b'}


def get_new_weight(param):
    '''Added helper: remap parameter keys from paddle names to tlx names'''
    new_param = {}
    for key in param.keys():
        new_param[pd2tlx[key]] = param[key]
        print(key, ":", param[key].shape, "vs", pd2tlx[key],":", new_param[pd2tlx[key]].shape)
    return new_param


def restore_model(param, model, model_type='vgg16'):
    """ 直接restore """
    weights = []
    if model_type == 'vgg16':
        for val in param.items():
        # for val in sorted(param.items()):
            weights.append(val[1])
            if len(model.all_weights) == len(weights):
                break
    elif model_type == 'vgg19':
        pass
    # assign weight values
    assign_weights(weights, model)
    del weights


def _tlxvgg(arch, cfg, batch_norm, pretrained, **kwargs):
    model = VGG(make_layers(cfgs[cfg], batch_norm=batch_norm), **kwargs)
    if pretrained:
        assert arch in model_urls, "{} model do not have a pretrained model now, you should set pretrained=False".format(
            arch)
        weight_path = get_weights_path_from_url(model_urls[arch][0],
                                                model_urls[arch][1])
        param = paddle.load(weight_path)
        # model.load_dict(param)
        new_param = get_new_weight(param)
        model.load_dict(new_param)
        # restore_model(param, model)
    return model


def tlxvgg16(pretrained=False, batch_norm=False, **kwargs):
    """VGG 16-layer model 
    
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet. Default: False.
        batch_norm (bool): If True, returns a model with batch_norm layer. Default: False.
    Examples:
        .. code-block:: python
            from paddle.vision.models import vgg16
            # build model
            model = vgg16()
            # build vgg16 model with batch_norm
            model = vgg16(batch_norm=True)
    """
    model_name = 'tlxvgg16'
    if batch_norm:
        model_name += ('_bn')
    return _tlxvgg(model_name, 'D', batch_norm, pretrained, **kwargs)


if __name__ == "__main__":
    from utils.load_image import load_image_tlx

    model = tlxvgg16(pretrained=True, batch_norm=False)
    # model = tlxvgg16(pretrained=True, batch_norm=True)
    # tlx.paddle:[1, 224, 224, 3], paddle:[1, 3, 224, 224]
    x = load_image_tlx("../images/dog.jpeg")
    out = model(x)
    # np.save("tmp/tlx_output.npy", np.array(out)[0])

    file_path = '../images/imagenet_classes.txt'
    with open(file_path) as f:
        classes = [line.strip() for line in f.readlines()]
    print(classes[np.argmax(out[0])])
