
anylabeling's Introduction

👋 Hi! I'm Viet Anh | Software Engineer | Maker | Writer.

Work Hard, Build More, and Share Stories. Always Curious.

Open Source Projects:


anylabeling's People

Contributors

liaozihang, martenkiehn, qqqhhh-any, scottix, vietanhdev, vietthanhnv


anylabeling's Issues

For help: how can a PyInstaller build use the GPU?

I built the app with this command:
pyinstaller anylabeling.spec
but when I run the resulting .exe, it cannot use the GPU.
How can I build an .exe with PyInstaller that uses the GPU?
Thanks
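For anyone hitting the same problem: a frozen build often fails to use the GPU because PyInstaller does not bundle the GPU provider DLLs shipped with onnxruntime-gpu. A minimal sketch of possible spec additions, assuming onnxruntime-gpu is installed; the site-packages path below is a placeholder, and whether this alone suffices depends on your CUDA setup:

```python
# Hypothetical additions to anylabeling.spec: bundle the onnxruntime GPU
# provider DLLs so the frozen .exe can find and load them at runtime.
import os

# Placeholder: point this at your environment's onnxruntime capi directory.
ort_capi = os.path.join("path", "to", "site-packages", "onnxruntime", "capi")

a = Analysis(
    ["anylabeling/app.py"],
    binaries=[
        # Copy the CUDA provider DLLs next to the frozen onnxruntime package.
        (os.path.join(ort_capi, "onnxruntime_providers_cuda.dll"), "onnxruntime/capi"),
        (os.path.join(ort_capi, "onnxruntime_providers_shared.dll"), "onnxruntime/capi"),
    ],
    ...
)
```

The CUDA and cuDNN runtime DLLs must additionally be resolvable on PATH on the target machine, since onnxruntime loads them dynamically.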

It seems the last commit broke the app

Hello, I am running app.py.

With your new commit it seems the app breaks, since there is no parent package:

Traceback (most recent call last):
  File "app.py", line 12, in <module>
    from .resources.resources import *  # noqa
ImportError: attempted relative import with no known parent package

I'll try to fix it.
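For context, this error is a general Python behavior, not specific to anylabeling: a relative import like `from .resources.resources import *` only works when the module is executed with its parent package set, e.g. via `python -m anylabeling.app`, not via `python app.py`. A small stdlib-only sketch reproducing the failure mode:

```python
# Reproduce the failure: running module code without package context
# (as "python app.py" does) makes relative imports fail.
def try_relative_import():
    try:
        # Compile and run a relative import with script-like globals:
        # __name__ is "__main__" and no __package__/__spec__ is set.
        exec(compile("from . import resources", "app.py", "exec"),
             {"__name__": "__main__"})
        return "ok"
    except ImportError as e:
        return str(e)

msg = try_relative_import()
print(msg)  # the "no known parent package" ImportError message
```

The usual fixes are to run the app as a module (`python -m anylabeling.app`) or to replace the relative import with an absolute one (`from anylabeling.resources.resources import *`).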

Why does this point still exist after finishing the object? It causes subsequent labeling errors.

  1. File "E:\anaconda\envs\exe\lib\site-packages\anylabeling\views\labeling\label_file.py", line 170, in save
         version=version,
     NameError: name 'version' is not defined

  2. File "E:\anaconda\envs\exe\lib\site-packages\anylabeling\views\labeling\widgets\label_list_widget.py", line 181, in find_item_by_shape
         raise ValueError(f"cannot find shape: {shape}")
     ValueError: cannot find shape: <anylabeling.views.labeling.shape.Shape object at 0x000001B51CE1AE20>

  3. My file: (two WeChat screenshots attached)

  4. The valid annotations were deleted manually.

Multi Object Tracking Support

It would be great if this tool could be used for labeling data for multi-object trackers. One idea is to add an object ID alongside the detection info. In general, there are very few options for multi-object tracking annotation. Let me know if I can help you implement it.
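The suggestion above could be carried by the existing label format with little change. A sketch of a per-shape record extended with a track id; the `track_id` field is hypothetical, while `label`, `shape_type`, `points`, and `group_id` mirror the labelme-style JSON that anylabeling already writes:

```python
import json

# Hypothetical per-shape record with tracking info; field names other than
# "track_id" follow the labelme-style schema.
shape = {
    "label": "car",
    "shape_type": "rectangle",
    "points": [[100.0, 150.0], [300.0, 400.0]],
    "group_id": 7,   # the existing group_id field could double as a track id
    "track_id": 7,   # or a dedicated field could be added
}
record = json.dumps(shape)
```

Reusing `group_id` would keep the files readable by existing labelme tooling, at the cost of overloading its meaning.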

Adaptive size.

Does your program support inputting images with sizes different from those preset in the "yaml" file?

Some issues with 0.1.1

Thank you for solving the problem that points could not be removed in 0.9.0. However, in version 0.1.1, when I use + Point and then Finish Object, I need to re-enter the object category every time.
(WeChat screenshot attached)
After I label twice with + Rect (entering the object category; once is not enough) and then switch to + Point, the object category appears.
(WeChat screenshot attached)
But at that point the category name does not appear in the input box; I need to switch to another category first.
(two WeChat screenshots attached)
I don't know if this is a bug, but I would like a persistent label setting, like in LabelImg. It would make repetitive work easier.

Run on CPU

I get the following warning when running in a CPU-only environment:
"[ WARN:[email protected]] global net_impl.cpp:174 setUpNet DNN module was not built with CUDA backend; switching to CPU"

SAM encoder ONNXRuntimeError

Running the latest version of anylabeling with the SAM model crashes:

2023-05-04 14:37:55.442148422 [W:onnxruntime:Default, tensorrt_execution_provider.h:63 log] [2023-05-04 06:37:55 WARNING] nx_tensorrt-src/onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
2023-05-04 14:37:55.442272866 [W:onnxruntime:Default, tensorrt_execution_provider.h:63 log] [2023-05-04 06:37:55 WARNING] nx_tensorrt-src/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
2023-05-04 14:37:55.459027396 [W:onnxruntime:Default, tensorrt_execution_provider.h:63 log] [2023-05-04 06:37:55 WARNING] nx_tensorrt-src/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
2023-05-04 14:37:55.477279852 [E:onnxruntime:Default, tensorrt_execution_provider.h:61 log] [2023-05-04 06:37:55   ERROR] [shuffleNode.cpp::symbolicExecute::392] Error Code 4: Internal Error (/blocks.0/Reshape: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
2023-05-04 14:37:55.478303947 [W:onnxruntime:Default, tensorrt_execution_provider.h:63 log] [2023-05-04 06:37:55 WARNING] nx_tensorrt-src/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
2023-05-04 14:37:55.553881868 [E:onnxruntime:, inference_session.cc:1532 operator()] Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:897 SubGraphCollection_t onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: /blocks.0/Pad_output_0 has no shape specified. Please run shape inference on the onnx model first. Details can be found in https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs

Traceback (most recent call last):
  File "./code/SAM/anylabeling/anylabeling/utils.py", line 15, in run
    self.func(*self.args, **self.kwargs)
  File "./code/SAM/anylabeling/anylabeling/services/auto_labeling/model_manager.py", line 151, in _load_model
    model_info["model"] = SegmentAnything(
  File "./code/SAM/anylabeling/anylabeling/services/auto_labeling/segment_anything.py", line 74, in __init__
    self.encoder_session = onnxruntime.InferenceSession(
  File "./miniconda3/envs/anylabeling/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 360, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "./miniconda3/envs/anylabeling/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 408, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:897 SubGraphCollection_t onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: /blocks.0/Pad_output_0 has no shape specified. Please run shape inference on the onnx model first. Details can be found in https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs
[1]    335899 IOT instruction (core dumped)  anylabeling
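The error message itself points at ONNX Runtime's recommended fix (run symbolic shape inference on the model first). A simpler workaround, if TensorRT is not required, is to keep the TensorRT execution provider out of the provider list so ORT falls back to CUDA or CPU. A sketch of such a filter; the provider strings are ONNX Runtime's standard names, and the helper itself is hypothetical:

```python
def filter_providers(available, blocklist=("TensorrtExecutionProvider",)):
    """Drop problematic execution providers, preserving ORT's priority order.

    Falls back to CPU if everything was filtered out.
    """
    kept = [p for p in available if p not in blocklist]
    return kept or ["CPUExecutionProvider"]

# The result would be passed as the `providers` argument of
# onnxruntime.InferenceSession(model_path, providers=...).
providers = filter_providers(
    ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
)
```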

Combine anylabeling with Grounded-SAM

Hi @vietanhdev

I love your project and I want to support it with a suggestion: maybe you can combine anylabeling with the Grounded-SAM repository, because Grounded-SAM has zero-shot object detection and would make anylabeling even better.

Importing annotations to anylabeling

Hello from a fellow Vietnamese @vietanhdev,

Thank you for releasing this amazing product as open source. I currently have a lot of auto-generated masks that need to be checked and corrected where necessary. These masks come from SAM with a lot of post-processing and are currently stored as binary masks.

I'm wondering if it's possible to import masks to anylabeling and edit?

TIA
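Converting existing binary masks into shape records that a labelme/anylabeling-style JSON file can hold is one way to approach this. The stdlib-only sketch below keeps only the bounding box of each mask; real polygon extraction would use something like `cv2.findContours`. The helper name is illustrative:

```python
def mask_to_rectangle(mask, label):
    """Turn a binary mask (nested lists of 0/1) into a rectangle shape record.

    Returns None for an empty mask. A real importer would trace polygon
    contours instead of collapsing to a bounding box.
    """
    ys = [i for i, row in enumerate(mask) if any(row)]
    xs = [j for row in mask for j, v in enumerate(row) if v]
    if not ys:
        return None
    return {
        "label": label,
        "shape_type": "rectangle",
        "points": [[min(xs), min(ys)], [max(xs), max(ys)]],
    }

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
shape = mask_to_rectangle(mask, "cell")
```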

onnxruntime crash when trying to run a quantized ONNX model with TensorRT

Hi, thanks for your great work.

I am trying to improve the performance of anylabeling when a GPU and the TensorRT backend are available.

I followed these steps:

  1. Download your ViT-B quantized ONNX model
  2. Convert the model using symbolic_shape_infer.py from the official documentation
  3. Enable "trt_int8_enable" in the TensorRT execution provider options

But I got the following error:

2023-05-26 08:27:12.631758183 [W:onnxruntime:Default, tensorrt_execution_provider.cc:1210 GetCapability] [TensorRT EP] No graph will run on TensorRT execution provider
2023-05-26 08:27:13.136179864 [W:onnxruntime:, session_state.cc:1136 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2023-05-26 08:27:13.136197010 [W:onnxruntime:, session_state.cc:1138 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
(2048, 2048, 3)
2023-05-26 08:27:13.995464756 [E:onnxruntime:Default, cuda_call.cc:119 CudaCall] CUDA failure 1: invalid argument ; GPU=1 ; hostname=vision ; expr=cudaMemcpyAsync(output.MutableDataRaw(), input.DataRaw(), input.Shape().Size() * input.DataType()->Size(), cudaMemcpyDeviceToDevice, stream); 
2023-05-26 08:27:13.995665084 [E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running Einsum node. Name:'/blocks.0/attn/Einsum' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/einsum_utils/einsum_auxiliary_ops.cc:298 std::unique_ptr<onnxruntime::Tensor> onnxruntime::EinsumOp::Transpose(const onnxruntime::Tensor&, const onnxruntime::TensorShape&, const gsl::span<const long unsigned int>&, onnxruntime::AllocatorPtr, void*, const Transpose&) 21Einsum op: Transpose failed: CUDA failure 1: invalid argument ; GPU=1 ; hostname=vision ; expr=cudaMemcpyAsync(output.MutableDataRaw(), input.DataRaw(), input.Shape().Size() * input.DataType()->Size(), cudaMemcpyDeviceToDevice, stream); 

terminate called after throwing an instance of 'onnxruntime::OnnxRuntimeException'
  what():  /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:124 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:117 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDA failure 700: an illegal memory access was encountered ; GPU=1 ; hostname=vision ; expr=cudaEventDestroy(event_); 

I saw that you manually filter out the TensorRT execution provider.
Have you ever met a similar issue?
Thanks in advance.

Error when downloading certain SAM models

Hi,

I'm trying to label an image with the help of the SAM models, but only some of them work. ViT-B Quant works without problems, but when selecting ViT-L Quant or ViT-H Quant, I get an exception and the program crashes. The full traceback is below. It seems weird, because the link mentioned in the traceback works when I open it in Safari.

Any idea what is going wrong here?

(Great job on the tool btw, this is exactly what I was looking for and I'm excited to get it working!!)

Traceback (most recent call last):
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/urllib/request.py", line 1354, in do_open
    h.request(req.get_method(), req.selector, req.data, headers,
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/http/client.py", line 1256, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/http/client.py", line 1302, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/http/client.py", line 1251, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/http/client.py", line 1011, in _send_output
    self.send(msg)
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/http/client.py", line 951, in send
    self.connect()
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/http/client.py", line 1418, in connect
    super().connect()
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/http/client.py", line 922, in connect
    self.sock = self._create_connection(
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/socket.py", line 787, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/socket.py", line 918, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/site-packages/anylabeling/services/auto_labeling/model.py", line 123, in get_model_abs_path
    urllib.request.urlretrieve(
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/urllib/request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/urllib/request.py", line 542, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/urllib/request.py", line 502, in _call_chain
    result = func(*args)
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/urllib/request.py", line 1397, in https_open
    return self.do_open(http.client.HTTPSConnection, req,
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/urllib/request.py", line 1357, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/site-packages/anylabeling/utils.py", line 15, in run
    self.func(*self.args, **self.kwargs)
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/site-packages/anylabeling/services/auto_labeling/model_manager.py", line 148, in _load_model
    model_info["model"] = SegmentAnything(
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/site-packages/anylabeling/services/auto_labeling/segment_anything.py", line 52, in __init__
    encoder_model_abs_path = self.get_model_abs_path(
  File "/Users/tobias/opt/anaconda3/envs/anylabeling/lib/python3.8/site-packages/anylabeling/services/auto_labeling/model.py", line 128, in get_model_abs_path
    raise Exception(
Exception: Could not download model from https://github.com/vietanhdev/anylabeling-assets/releases/download/v0.2.0/segment_anything_vit_l_encoder_quant.onnx: <urlopen error [Errno 8] nodename nor servname provided, or not known>

CUDA not able to be loaded

When I try to choose any auto model I get the error below:

Error in loading model: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:537 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

(screenshot attached)

I installed the executable and also tried with Conda, and got the same error.
My system is Windows 11 with an RTX 3080 GPU and Python 3.10.8.

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Jun__8_16:59:34_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0

torch.cuda.is_available()
True

Could anyone help me with this problem?
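This symptom (CUDA_PATH set but CUDA "not able to be loaded") usually means one of the DLLs that onnxruntime-gpu loads dynamically is not resolvable on PATH. A small diagnostic sketch; the DLL names below are assumptions for a CUDA 11.x / cuDNN 8.x setup on Windows and may differ on your machine:

```python
import os

def find_on_path(filename, path_env=None):
    """Return the first PATH directory containing `filename`, else None."""
    path_env = path_env if path_env is not None else os.environ.get("PATH", "")
    for d in path_env.split(os.pathsep):
        if d and os.path.isfile(os.path.join(d, filename)):
            return d
    return None

# Assumed DLL names for CUDA 11.x + cuDNN 8.x; adjust for your versions.
for dll in ("cudart64_110.dll", "cudnn64_8.dll", "zlibwapi.dll"):
    print(dll, "->", find_on_path(dll) or "NOT FOUND on PATH")
```

Note that `torch.cuda.is_available()` returning True does not prove anything here: PyTorch wheels bundle their own CUDA runtime, while onnxruntime-gpu resolves CUDA and cuDNN from the system.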

Delete label

I accidentally created a label, and want to delete it. How can I do this?

How SAM generates segmentation annotations from points?

Hi Viet Anh,

I would like to ask you some questions. When I use SAM to do segmentation annotation, I use the point pattern. How does SAM determine the split area based on the points I enter?

Looking forward to your answer!

bug: App crashes while loading images on intel mac

Steps to reproduce:

  • git clone repo
  • setup venv
  • install anylabeling and requirements.txt
  • run anylabeling

When attempting to import image files, the app crashes with the following error:

  File "/Users/sm/Documents/repos/anylabeling/.venv/lib/python3.10/site-packages/anylabeling/views/labeling/widgets/canvas.py", line 263, in mouseMoveEvent
    pos = self.transform_pos(ev.localPos())
  File "/Users/sm/Documents/repos/anylabeling/.venv/lib/python3.10/site-packages/anylabeling/views/labeling/widgets/canvas.py", line 899, in transform_pos
    return point / self.scale - self.offset_to_center()
  File "/Users/sm/Documents/repos/anylabeling/.venv/lib/python3.10/site-packages/anylabeling/views/labeling/widgets/canvas.py", line 911, in offset_to_center
    return QtCore.QPoint(x, y)
TypeError: arguments did not match any overloaded call:
  QPoint(): too many arguments
  QPoint(int, int): argument 1 has unexpected type 'float'
  QPoint(QPoint): argument 1 has unexpected type 'float'

I've tried modifying the offset_to_center function so that it passes ints to the QPoint constructor, but I still receive the same error.
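The usual fix for this class of error is exactly the int-cast described above (or using `QtCore.QPointF` where a float point is acceptable downstream); if the error persists after editing, a common cause is editing a source checkout while the venv's installed copy of the package is the one actually imported. A Qt-free sketch of the offset computation with the cast applied; the function name and signature are illustrative, modeled loosely on the canvas logic in the traceback:

```python
def offset_to_center_coords(area_w, area_h, pix_w, pix_h, scale):
    """Offset (in image coordinates) that centers a scaled pixmap in the widget.

    The divisions produce floats; QPoint only accepts ints, so cast before
    constructing QtCore.QPoint(x, y).
    """
    w, h = pix_w * scale, pix_h * scale
    x = (area_w - w) / (2 * scale) if area_w > w else 0
    y = (area_h - h) / (2 * scale) if area_h > h else 0
    return int(x), int(y)  # int-cast before handing to QtCore.QPoint
```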

Load Custom Model

I have a YOLOv5 model trained on a custom dataset. I want to load it as TorchScript and use it to label the rest of the dataset. It seems that only standard YOLOv5/YOLOv8/SAM models can be loaded now.

Shortcuts to reduce the labeling time.

Adding shortcuts for Point +, Point -, Rect, etc. would be helpful for fast labeling, including assigning a shape to a label by pressing just a number: map the classes to numbers so that pressing a number adds the shape to that class. Every time I create a shape, I spend more time refining it and selecting its class.
Thanks for the effort.
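The class-to-number mapping suggested above could be as simple as a lookup table consulted by the key-press handler. A sketch with hypothetical class names; in Qt the wiring itself would go through `QShortcut`/`QKeySequence`:

```python
# Hypothetical digit-to-class keymap; the class names are examples only.
CLASS_KEYMAP = {
    "1": "person",
    "2": "car",
    "3": "bicycle",
}

def label_for_key(key, default=None):
    """Resolve a pressed digit key to a class label, or `default` if unmapped."""
    return CLASS_KEYMAP.get(key, default)
```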

About saving a single class to file

When I use the SAM model to label objects, I select my object in the image, but every time I have to click the Finish Object button or press F to choose the name. This is inconvenient when I only label one class per image: I have to press F to select it every time. For the single-class case, could you provide a default label name so that I don't need to click Finish Object anymore? I hope the author can make single-class labeling better.

Where is the ONNX file path?

Hi, thanks for sharing. When I choose auto segmentation, the exe prompts that it is loading a samXXX.onnx file, but the download does not succeed, so I tried downloading the pretrained SAM ONNX file by hand. However, I don't know where to place the file for AnyLabeling.exe.
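The model cache directory is not stated in this thread, so rather than guess the exact path, one option is to search for where a (possibly partial) download landed and place the manually downloaded file there under the same name. A stdlib-only sketch with an illustrative helper name:

```python
from pathlib import Path

def find_model_files(root, pattern="*.onnx"):
    """Recursively list files matching `pattern` under `root`, sorted."""
    return sorted(str(p) for p in Path(root).rglob(pattern))

# e.g. find_model_files(Path.home()) to locate AnyLabeling's model cache
# (searching the whole home directory can take a while on large disks).
```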

How should I configure GPU inference using the exe installation package?

After selecting the model, it reports this error:
(screenshot attached)

My local environment is CUDA 11.7 + cuDNN 8.6 + onnxruntime-gpu 1.14.0. All the relevant environment variables are set and CUDA_PATH is configured correctly. I have also placed the onnxruntime-gpu 1.14.0 DLLs in the run directory.
(screenshot attached)
With the YOLOv8n model there were no errors, but Task Manager shows that GPU inference is not being used.
@vietanhdev May I ask what I should do?

Crash when loading SAM Model

Using GTX1650 GPU.

Traceback (most recent call last):
  File "D:\anaconda3\envs\anylabeling\lib\site-packages\anylabeling\utils.py", line 15, in run
    self.func(*self.args, **self.kwargs)
  File "D:\anaconda3\envs\anylabeling\lib\site-packages\anylabeling\services\auto_labeling\model_manager.py", line 151, in _load_model
    model_info["model"] = SegmentAnything(
  File "D:\anaconda3\envs\anylabeling\lib\site-packages\anylabeling\services\auto_labeling\segment_anything.py", line 74, in __init__
    self.encoder_session = onnxruntime.InferenceSession(
  File "D:\anaconda3\envs\anylabeling\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 360, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "D:\anaconda3\envs\anylabeling\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 408, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1106 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\anaconda3\envs\anylabeling\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"

Protobuf parsing failed

Hi, thank you for your great work. I get this error when I try auto labeling:

Traceback (most recent call last):
  File "E:\Anaconda\envs\cubercnn\lib\site-packages\anylabeling\utils.py", line 15, in run
    self.func(*self.args, **self.kwargs)
  File "E:\Anaconda\envs\cubercnn\lib\site-packages\anylabeling\services\auto_labeling\model_manager.py", line 118, in _load_model
    model_info["model"] = SegmentAnything(
  File "E:\Anaconda\envs\cubercnn\lib\site-packages\anylabeling\services\auto_labeling\segment_anything.py", line 57, in __init__
    self.encoder_session = onnxruntime.InferenceSession(
  File "E:\Anaconda\envs\cubercnn\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 360, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "E:\Anaconda\envs\cubercnn\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 397, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from C:\Users\user\data\models\segment_anything\vit_b-encoder-quant.onnx failed:Protobuf parsing failed.

How to use SAM with GPU?

I found that when segmenting with SAM in anylabeling, the GPU is not used. How can I address this?

How to delete the hole?

For example, I want to label donuts, but I need to erase the holes.
I don't know how to delete a hole.
The image below is the same as the donut case; how should I label it?

(two images attached)
I want to get the result in the second image, but I have no idea how.
(I use labelme, and labelme can't delete the middle hole.)
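One common post-processing approach (not a built-in anylabeling feature, as far as this thread shows) is to label the outer boundary and the hole as two separate shapes, rasterize both, and subtract the inner mask from the outer one. A pure-Python sketch over nested lists; with numpy this would be `outer & ~inner`:

```python
def subtract_masks(outer, inner):
    """Return outer minus inner: 1 where outer is set and inner is not."""
    return [
        [1 if o and not i else 0 for o, i in zip(orow, irow)]
        for orow, irow in zip(outer, inner)
    ]

outer = [[1, 1, 1],
         [1, 1, 1],
         [1, 1, 1]]
inner = [[0, 0, 0],
         [0, 1, 0],
         [0, 0, 0]]
donut = subtract_masks(outer, inner)  # 3x3 ring with a hole in the center
```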

GroundingDINO

Could you add GroundingDINO?
This model can annotate images using natural language.
