
pynetspresso's Introduction



🌟 STMicro x NetsPresso 🌟
STM32 model zoo


Use NetsPresso for a seamless model optimization process. NetsPresso resolves AI-related constraints in business use cases and enables cost efficiency and enhanced performance by removing the need for high-spec servers and constant network connectivity, and by preventing high latency and personal data breaches.

Easily compress various models with our resources. Please browse the Docs for details, and join our Discussion Forum to provide feedback or share your use cases.

To get started with NetsPresso, you'll need to sign up here.

We offer a comprehensive guide to walk you through the process of optimizing an AI model using NetsPresso. A full tutorial is available on Google Colab.




Step        Type              Description
Train       np.trainer        Build and train a model.
                              Model Zoo:
                                • Image Classification: PyTorch-CIFAR-Models
                                • Object Detection: YOLO Fastest, YOLOX, YOLOv5, YOLOv7
                                • Semantic Segmentation: PIDNet
                                • Pose Estimation: YOLOv8
Compress    np.compressor     Compress and optimize the user's model.
Convert     np.converter      Convert and quantize the user's model to run efficiently on device.
Benchmark   np.benchmarker    Benchmark the user's model to measure inference speed on diverse devices.
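Below is a minimal sketch of how these four modules are obtained from a single NetsPresso session, using the same calls shown in the Getting started section; the credentials and task are placeholders.

from netspresso import NetsPresso
from netspresso.enums import Task

# Log in once, then declare each module from the same session.
netspresso = NetsPresso(email="YOUR_EMAIL", password="YOUR_PASSWORD")

trainer = netspresso.trainer(task=Task.OBJECT_DETECTION)  # np.trainer
compressor = netspresso.compressor()                      # np.compressor
converter = netspresso.converter()                        # np.converter
benchmarker = netspresso.benchmarker()                    # np.benchmarker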

Installation

Prerequisites

  • Python 3.8 | 3.9 | 3.10
  • PyTorch 1.13.0 (recommended) (compatible with: 1.11.x - 1.13.x)
  • TensorFlow 2.8.0 (recommended) (compatible with: 2.3.x - 2.8.x)

Install with PyPI (stable)

pip install netspresso

To use editable mode or docker, see INSTALLATION.md.

Getting started

Login

Log in to your NetsPresso account. Please sign up here if you need one.

from netspresso import NetsPresso

netspresso = NetsPresso(email="YOUR_EMAIL", password="YOUR_PASSWORD")

Trainer

Train

To start training a model, first select a task.

Then configure the dataset, model, augmentation, and hyperparameters.

Once setup is finished, enter the GPU number and project name for training.

from netspresso.enums import Task
from netspresso.trainer.optimizers import AdamW
from netspresso.trainer.schedulers import CosineAnnealingWarmRestartsWithCustomWarmUp
from netspresso.trainer.augmentations import Resize


# 1. Declare trainer
trainer = netspresso.trainer(task=Task.OBJECT_DETECTION)  # IMAGE_CLASSIFICATION, OBJECT_DETECTION, SEMANTIC_SEGMENTATION

# 2. Set config for training
# 2-1. Data
trainer.set_dataset_config(
    name="traffic_sign_config_example",
    root_path="/root/traffic-sign",
    train_image="images/train",
    train_label="labels/train",
    valid_image="images/valid",
    valid_label="labels/valid",
    id_mapping=["prohibitory", "danger", "mandatory", "other"],
)

# 2-2. Model
print(trainer.available_models)  # ['EfficientFormer', 'YOLOX-S', 'ResNet', 'MobileNetV3', 'MixNetL', 'MixNetM', 'MixNetS']
trainer.set_model_config(model_name="YOLOX-S", img_size=512)

# 2-3. Augmentation
trainer.set_augmentation_config(
    train_transforms=[Resize()],
    inference_transforms=[Resize()],
)

# 2-4. Training
optimizer = AdamW(lr=6e-3)
scheduler = CosineAnnealingWarmRestartsWithCustomWarmUp(warmup_epochs=10)
trainer.set_training_config(
    epochs=40,
    batch_size=16,
    optimizer=optimizer,
    scheduler=scheduler,
)

# 3. Train
training_result = trainer.train(gpus="0, 1", project_name="PROJECT_TRAIN_SAMPLE")

Retrain

To start retraining a model, use the hparams.yaml file, which is one of the artifacts generated during training of the original model.

Then pass the compressed model path, an artifact of the compressor, as fx_model_path.

Adjust the training hyperparameters as needed. (See 2-2. for detailed code.)

from netspresso.trainer.optimizers import AdamW

# 1. Declare trainer
trainer = netspresso.trainer(yaml_path="./temp/hparams.yaml")

# 2. Set config for retraining
# 2-1. FX Model
trainer.set_fx_model(fx_model_path="./temp/FX_MODEL_PATH.pt")

# 2-2. Training
optimizer = AdamW(lr=6e-3)
trainer.set_training_config(
    epochs=30,
    batch_size=16,
    optimizer=optimizer,
)

# 3. Train
retraining_result = trainer.train(gpus="0, 1", project_name="PROJECT_RETRAIN_SAMPLE")

Compressor

Compress (Automatic compression)

To start compressing a model, enter the model path to compress and the appropriate compression ratio.

The compressed model will be saved in the specified output directory (output_dir).

# 1. Declare compressor
compressor = netspresso.compressor()

# 2. Run automatic compression
compression_result = compressor.automatic_compression(
    input_shapes=[{"batch": 1, "channel": 3, "dimension": [224, 224]}],
    input_model_path="./examples/sample_models/graphmodule.pt",
    output_dir="./outputs/compressed/pytorch_automatic_compression",
    compression_ratio=0.5,
)

Converter

Convert

To start converting a model, enter the path of the model to convert, along with the target framework and device name.

For NVIDIA GPUs and Jetson devices, also enter the software version, because of the JetPack version dependency.

The converted model will be saved in the specified output directory (output_dir).

from netspresso.enums import DeviceName, Framework, SoftwareVersion

# 1. Declare converter
converter = netspresso.converter()

# 2. Run convert
conversion_result = converter.convert_model(
    input_model_path="./examples/sample_models/test.onnx",
    output_dir="./outputs/converted/TENSORRT_JETSON_AGX_ORIN_JETPACK_5_0_1",
    target_framework=Framework.TENSORRT,
    target_device_name=DeviceName.JETSON_AGX_ORIN,
    target_software_version=SoftwareVersion.JETPACK_5_0_1,
)

Benchmarker

Benchmark

To start benchmarking a model, enter the path of the model to benchmark and the target device name.

For NVIDIA GPUs and Jetson devices, the device name and software version must match the target device used for conversion.

A TensorRT model has a strong dependency on the device type and its JetPack version.

from netspresso.enums import DeviceName, SoftwareVersion

# 1. Declare benchmarker
benchmarker = netspresso.benchmarker()

# 2. Run benchmark
benchmark_result = benchmarker.benchmark_model(
    input_model_path="./outputs/converted/TENSORRT_JETSON_AGX_ORIN_JETPACK_5_0_1/TENSORRT_JETSON_AGX_ORIN_JETPACK_5_0_1.trt",
    target_device_name=DeviceName.JETSON_AGX_ORIN,
    target_software_version=SoftwareVersion.JETPACK_5_0_1,
)
print(f"model inference latency: {benchmark_result['result']['latency']} ms")
print(f"model gpu memory footprint: {benchmark_result['result']['memory_footprint_gpu']} MB")
print(f"model cpu memory footprint: {benchmark_result['result']['memory_footprint_cpu']} MB")
Supported options for Converter & Benchmarker

Frameworks supported for conversion, by source framework

Target framework    Supported source frameworks
TENSORRT            ONNX
DRPAI               ONNX
OPENVINO            ONNX
TENSORFLOW_LITE     ONNX, TENSORFLOW_KERAS, TENSORFLOW

Devices that support benchmarks, by model framework

Device                          Supported frameworks
RASPBERRY_PI_5                  ONNX, TENSORFLOW_LITE
RASPBERRY_PI_4B                 ONNX, TENSORFLOW_LITE
RASPBERRY_PI_3B_PLUS            ONNX, TENSORFLOW_LITE
RASPBERRY_PI_ZERO_W             ONNX, TENSORFLOW_LITE
RASPBERRY_PI_ZERO_2W            ONNX, TENSORFLOW_LITE
ARM_ETHOS_U_SERIES              TENSORFLOW_LITE (INT8 only)
ALIF_ENSEMBLE_E7_DEVKIT_GEN2    TENSORFLOW_LITE (INT8 only)
RENESAS_RA8D1                   TENSORFLOW_LITE (INT8 only)
NXP_iMX93                       TENSORFLOW_LITE (INT8 only)
ARDUINO_NICLA_VISION            TENSORFLOW_LITE (INT8 only)
RENESAS_RZ_V2L                  ONNX, DRPAI
RENESAS_RZ_V2M                  ONNX, DRPAI
JETSON_NANO                     ONNX, TENSORRT
JETSON_TX2                      ONNX, TENSORRT
JETSON_XAVIER                   ONNX, TENSORRT
JETSON_NX                       ONNX, TENSORRT
JETSON_AGX_ORIN                 ONNX, TENSORRT
JETSON_ORIN_NANO                ONNX, TENSORRT
AWS_T4                          ONNX, TENSORRT
INTEL_XEON_W_2233               OPENVINO

Software versions that support conversions and benchmarks for specific devices

A software version is required only for Jetson devices. If you are using a different device, you do not need to enter it.

Software Version    Supported Jetson devices
JETPACK_4_4_1       JETSON_NANO
JETPACK_4_6         JETSON_NANO, JETSON_TX2, JETSON_XAVIER, JETSON_NX
JETPACK_5_0_1       JETSON_AGX_ORIN
JETPACK_5_0_2       JETSON_NX
JETPACK_6_0         JETSON_ORIN_NANO

The code below is an example of using software version.

conversion_result = converter.convert_model(
    input_model_path=INPUT_MODEL_PATH,
    output_dir=OUTPUT_DIR,
    target_framework=Framework.TENSORRT,
    target_device_name=DeviceName.JETSON_AGX_ORIN,
    target_software_version=SoftwareVersion.JETPACK_5_0_1,
)
benchmark_result = benchmarker.benchmark_model(
    input_model_path=CONVERTED_MODEL_PATH,
    target_device_name=DeviceName.JETSON_AGX_ORIN,
    target_software_version=SoftwareVersion.JETPACK_5_0_1,
)

Hardware types that support benchmarks for specific devices

Benchmark and compare models with and without Arm Helium.

RENESAS_RA8D1 and ALIF_ENSEMBLE_E7_DEVKIT_GEN2 are available for use.

The benchmark results with Helium can be up to twice as fast as without Helium.

The code below is an example of using hardware type.

from netspresso.enums import DataType, DeviceName, HardwareType

benchmark_result = benchmarker.benchmark_model(
    input_model_path=CONVERTED_MODEL_PATH,
    target_device_name=DeviceName.RENESAS_RA8D1,
    target_data_type=DataType.INT8,
    target_hardware_type=HardwareType.HELIUM,
)

Guide to Credit Consumption by Module

Module         Feature                  Credit
Compressor     Automatic compression    25
Compressor     Advanced compression     50
Converter      Convert                  50
Benchmarker    Benchmark                25

Contact

Join our Discussion Forum to provide feedback or share your use cases, and if you want to talk more with Nota, please contact us here.
You can also reach us via email ([email protected]) or phone (+82 2-555-8659)!

pynetspresso's People

Contributors

cbpark-nota, daechanim, only-bottle, sanggeonpark, youngkim1104

pynetspresso's Issues

[Sprint] Divide Launcher into Converter and Benchmarker

Description

  • Divide Launcher into Converter and Benchmarker.
    • Launcher -> Converter & Benchmarker

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Implement json data management functions

Description

  • Implement a function that manages JSON data to be used by the GUI.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Enhance folder existence check and creation

Description

  • Add logic that automatically creates missing folders in the path entered when downloading the model (see the sketch below).
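A minimal sketch of the intended behavior, assuming pathlib is used; the download path below is only an example.

from pathlib import Path

# Hypothetical download destination; create any missing parent folders instead of failing.
download_dir = Path("./outputs/models")
download_dir.mkdir(parents=True, exist_ok=True)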

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

Login failed with SSL certificate verification error

I'm trying to use the Netspresso Python client to log in to my account using the SessionClient class, but I'm facing an SSL certificate verification error. Here are the details:

I'm using the latest version of the Netspresso Python client.
I'm using my Netspresso account email and password to log in.
I'm getting the following error message:

Login failed. Error: HTTPSConnectionPool(host='searcher.netspresso.ai', port=443): Max retries exceeded with url: /api/v1/auth/local/login (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')))

I believe this error is related to SSL certificate verification, but I'm not sure how to fix it. Can someone from the Netspresso team help me troubleshoot this issue?

Thanks in advance for your help!
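Not an official fix, but a common environment-level workaround for CERTIFICATE_VERIFY_FAILED errors is to point Python's HTTP stack at an up-to-date (or corporate) CA bundle before logging in; whether this resolves the error depends on your network setup.

import os
import certifi

# The requests library honors REQUESTS_CA_BUNDLE; use certifi's bundle, or the path to
# your organization's CA bundle if outbound traffic goes through an inspecting proxy.
os.environ["REQUESTS_CA_BUNDLE"] = certifi.where()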

[Sprint] Add PyPI packaging workflow

Description

  • Add pypi packaging with github workflow

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Add github folder download

Description

  • Add github_download to download netspresso-trainer config folder

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Add jupyter notebook examples

Description

  • Add jupyter notebook examples

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Implement the status management function of a task

Description

  • Implement functions that manage the state of train, compress, convert, and benchmark tasks.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Update logging message & error message

Description

  • Provide readable messages and improve the tone.
  • Provide solutions and examples.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Show credit remaining after each module has been used

Description

  • Show the remaining credits of the user after using the module.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Sync example code with default value

Description

  • The compression option values in the example code must be set to the defaults.

As-is

OPTIONS = Options(
    policy=Policy.AVERAGE, layer_norm=LayerNorm.TSS_NORM, group_policy=GroupPolicy.COUNT, reshape_channel_axis=-1
)

To-be

Options(
    policy=Policy.AVERAGE, layer_norm=LayerNorm.STANDARD_SCORE, group_policy=GroupPolicy.AVERAGE, reshape_channel_axis=-1
)

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Modify to build trainer instance with original trainer config for retraining

Description

  • Modify to build trainer instance with original trainer config for retraining.

ex)

# For Training
trainer = Trainer(task=Task.OBJECT_DETECTION)

# For Retraining
retrainer = Trainer(task=Task.OBJECT_DETECTION, from_config="config_path")

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Add convert (TFLite int8) and benchmark for Alif Ensemble DevKit Gen2 and Renesas RA8D1

Description

  • Add convert (TFLite int8) and benchmark for Alif Ensemble DevKit Gen2 and Renesas RA8D1

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

Todo

  • Add int8 data type for TFLite converting
  • Add Alif Ensemble DevKit Gen2 and Renesas RA8D1 to target devices

[Sprint] Fix recommendation compression

Description

  • There is a bug in the process of assigning the recommendation result to the available layer.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[BUG] Version dependency for `typing_extensions`

Describe the bug

Recent libraries such as FastAPI require an up-to-date typing_extensions (e.g. 4.8.0).

The pinned version requirements are too specific, so it is hard to install alongside other libraries...

Plus, both pydantic==1.10.4 and fastapi, the library's other dependencies, are satisfied with typing-extensions==4.8.0.
However, the current pin typing_extensions==4.5.0 causes an error.

This would be solved if requirements.txt were changed to:

pydantic==1.10.8
typing-extensions>=4.8.0

Have you searched existing issues? 🔎

  • I have searched and found no existing issues

Reproduction

Just

pip install netspresso

(Use with library)

throws an error.

Screenshot

No response

Logs

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
fastapi 0.104.1 requires typing-extensions>=4.8.0, but you have typing-extensions 4.5.0 which is incompatible.
pydantic-core 2.14.3 requires typing-extensions!=4.7.0,>=4.6.0, but you have typing-extensions 4.5.0 which is incompatible.

System Info

Python 3.8
General with all OS systems

[Sprint] Modify output_path as folder path rather than file path

Description

  • Currently, output_path is received as a file path; we will modify it to be received as a folder path.

As-is

compressed_model = compressor.automatic_compression(
    ...
    output_path="./outputs/compressed/compressed_model.h5"
)

To-be

compressed_model = compressor.automatic_compression(
    ...
    output_path="./outputs/compressed/automatic_05"
)

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Improve compression, convert, and benchmark usability

Description

  • Improve compression, convert, and benchmark usability.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Refactoring API Clients

Description

  • Refactoring API Clients(Compressor, Launcher)

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Add onnx export automatically after compression

Description

  • Add ONNX export after compression for immediate use with the converter and benchmarker

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Feature Request] Add release notes

  • I have searched to see if a similar issue already exists.

Is there any feature that you would like to add?
PyNetsPresso does not have release notes. Release notes covering previous versions should be added.

Is there any suggestion (or solution) to solve this issue?
Please write a clear and concise description.

Additional context
All other contexts or screenshots are welcome.

Additional guide for pytorch checkpoint + torch checkpoint converting

The Compressor API requires a specific type of PyTorch checkpoint, an fx GraphModule, but there is no guide in the README.

According to docs, the model can be converted with the following commands:

Compressor

  • pytorch checkpoint (state_dict) -> fx graphmodule
import torch.fx
from torchvision.models import resnet18, ResNet18_Weights

# Trace the model into a torch.fx GraphModule and save the whole traced module.
model = resnet18(weights=ResNet18_Weights.DEFAULT)
graph = torch.fx.Tracer().trace(model)
traced_model = torch.fx.GraphModule(model, graph)
torch.save(traced_model, "resnet18.pt")

Launcher

  • pytorch checkpoint (state_dict) -> onnx
from torchvision.models import resnet18
import torch
from torch.onnx import TrainingMode

input_tensor = torch.rand(torch.Size([1, 3, 224, 224]))
model = resnet18(pretrained=True)
dummy_output = model(input_tensor)
torch.onnx.export(model, input_tensor, "resnet18.onnx", verbose=True, training=TrainingMode.TRAINING)

I'll add some descriptions about torch checkpoint and checkpoint converting features if needed.

[Sprint] Change YOLOX to YOLOX-S

Description

  • Change YOLOX to YOLOX-S.
    • Currently, the trainer only provides YOLOX-S pre-trained weights.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Fix `reissue_token` bug & Remove instance variable

Description

  • Fix reissue_token endpoint
    • token -> auth/token
  • Remove instance variables (email & password)

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Add assertion for file_path or dir_path

Description

  • Add assertions for file_path and dir_path (see the sketch below).
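A minimal sketch of what such assertions could look like; the helper names are hypothetical.

from pathlib import Path

def assert_file_path(path: str) -> None:
    # Hypothetical helper: fail fast with a clear message when the input is not an existing file.
    assert Path(path).is_file(), f"Expected an existing file path, got: {path}"

def assert_dir_path(path: str) -> None:
    # Hypothetical helper: fail fast when the output location is not an existing directory.
    assert Path(path).is_dir(), f"Expected an existing directory path, got: {path}"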

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Update User's Guide

Description

  • Improve the user's guide of PyNetsPresso.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Rename `model_trainer.py` to `trainer.py`

Description

  • Rename model_trainer.py to trainer.py.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Update example code

Description

  • Update example code.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Remove validate_token decorator

Description

  • Remove validate_token decorator.
    • It is inconvenient to check the parameters and return values of the function because of the decorator.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

Allow `fp16` input tensor to apply with `fp16` saved model

Problem

From @illian01

Compressor

  • The current API takes the input tensor as input_shape.
  • Some models are saved with half() (i.e. fp16) and cannot run inference with a float32 input tensor.
    • Such a model should be given an example input tensor whose dtype matches the model (float16); see the sketch below.
  • An additional feature is needed so users can select the data type of the example input tensor.
  • Our back end already supports different input tensor dtypes, and this does not affect compressing the model.
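A minimal sketch of the dtype mismatch, using a torchvision model purely for illustration.

import torch
from torchvision.models import resnet18

# Illustration only: a model whose weights have been cast to fp16.
model = resnet18().half().eval()

# A float32 example input no longer matches the parameter dtype, so tracing or inference
# with it would fail; the example input should be created with the model's dtype instead.
example_input_fp32 = torch.rand(1, 3, 224, 224)          # dtype mismatch with the fp16 model
example_input_fp16 = torch.rand(1, 3, 224, 224).half()   # matches the fp16 model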

Related Link

Slack: https://nota-workspace.slack.com/archives/C040F65LSAJ/p1693546735327899

[Sprint] Apply the updated version of the trainer

Description

  • Apply the v0.1.2 version of the trainer.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Remove enum value from return of module

Description

  • Remove enum values from module return values.
    • Instead of an enum value, it would be nicer to return something that looks like a plain string value (see the sketch below).
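One way to do this, sketched with a hypothetical enum rather than the library's actual definitions, is a str-based Enum whose members behave like plain strings (or simply returning member.value).

from enum import Enum

class ExampleFramework(str, Enum):  # hypothetical, for illustration only
    TENSORRT = "tensorrt"
    TENSORFLOW_LITE = "tensorflow_lite"

# A str-based member compares and serializes like its underlying string...
assert ExampleFramework.TENSORRT == "tensorrt"
# ...or return member.value explicitly so callers only ever see a plain string.
print(ExampleFramework.TENSORRT.value)  # tensorrt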

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Rename Trainer, Compressor, Converter, Benchmarker

Description

  • Remove the "Model" prefix from the name of each module.
  • ex) ModelTrainer -> Trainer

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Add typing to a function or parameter

Description

  • Add typing to a function or parameter.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Add plotter for visualization on jupyter notebook

Description

  • Add visualization of results in Jupyter notebooks (e.g. latency, metrics, FLOPs, etc.)

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Change `number_of_layers` default value

Description

  • Change the default value of number_of_layers from 0 to None.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Remove unused functions

Description

  • The modules' artifacts are stored locally, so the lookup functions are no longer required.
    • ex) get_models, get_compression, etc.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

Add files for Community Standards

  • I have searched to see if a similar issue already exists.

Is there any feature that you would like to add?
Add files for Community Standards:

  • Github Issue & PR Template
  • CODE_OF_CONDUCT
  • CONTRIBUTING
  • SECURITY

Is there any suggestion (or solution) to solve this issue?
Please write a clear and concise description.

Additional context
All other context or screenshots are welcome (notes, discussion content, screenshots, etc.).

[Sprint] Change os.path to Pathlib

Description

  • Switch to pathlib because we currently use both os.path and pathlib (see the sketch below).
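A small sketch of equivalent calls, with illustrative paths, showing os.path usages next to their pathlib counterparts.

import os.path
from pathlib import Path

# Joining paths
os.path.join("outputs", "compressed", "model.pt")    # 'outputs/compressed/model.pt' (on POSIX)
Path("outputs") / "compressed" / "model.pt"

# Existence checks
os.path.exists("./outputs")
Path("./outputs").exists()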

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Improve and unify input parameters by module

Description

  • Inputs are received as a file path (file_path), and outputs as a directory path (dir_path).

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Move the enum file in the trainer to a common enum

Description

  • Move the enum file in the trainer to a common enum.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Change `task` & `framework` parameters to optional in compressor

Description

  • Change task & framework parameters to optional in compressor
    • The default for the task is other.
    • The default for the framework is pytorch.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Improve netspresso interface

Description

  • Improve netspresso interface

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Handling exceptions for nuclear norm

Description

  • Currently, PyTorch does not support Nuclear Norm(PR_NN).

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Add npy file transfer for int8 quantization

Description

  • Add npy file transfer for int8 quantization.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Add the paths of each module's artifacts to metadata

Description

  • Add the paths of each module's artifacts to metadata.
    • ex) best_pt_path, best_onnx_path, compressed_model_path, converted_model_path .. etc

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Add Dockerfile & Upload Docker image to Docker Hub

Description

  • Upload the Docker image to Docker Hub so that users can use it directly as a Docker image instead of installing via pip.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.

[Sprint] Integrate netspresso-trainer

Description

  • Enable training with netspresso-trainer on pynp.

Checklists

  • I would create a corresponding branch for this issue from the designated (mostly develop) branch.
  • This issue only contains a preceding agreement between project owners.
