alibaba / lightweight-neural-architecture-search
This is a collection of our zero-cost NAS and efficient vision applications.
License: Apache License 2.0
May I ask: in the deepmad_R18 structure defined in flops.py, what is `btn` in the structure info?
How to write a configuration script for a specific network model?
I want to compare the MAE-DET model with YoloV5,
but I cannot find whether MAE-DET uses an IoU threshold of 0.5 or 0.5:0.95 to compute mAP.
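For reference, the two common mAP conventions differ only in which IoU thresholds are averaged; a minimal sketch of the COCO-style "0.5:0.95" threshold set (mAP@0.5 uses just the single threshold 0.5):

```python
import numpy as np

# COCO-style mAP averages AP over ten IoU thresholds, 0.50 to 0.95 in steps of 0.05.
iou_thresholds = np.linspace(0.5, 0.95, 10)
print(iou_thresholds)
```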
respect
I went over how you derived the entropy of the MLP. I found that the final formula relies on the standard-normal-distribution assumption in Box 1, which makes the entropy of the MLP depend only on the structure of the MLP itself (the width of each layer and the depth of the network), not on the network's weights or inputs.
So when I use this formula, based on the standard-normal assumption, to design an MLP, can I really ignore the network's weights and the input distribution?
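For intuition, the claim can be illustrated with a toy score that is a function of the architecture alone. The `mlp_entropy` below is a hypothetical stand-in, not the paper's exact formula; it only shows that such a score needs neither trained weights nor data:

```python
import math

def mlp_entropy(widths):
    # Hypothetical stand-in score: depends only on the layer widths and the
    # depth (the number of entries in `widths`), never on weights or inputs.
    return sum(math.log(w) for w in widths)

# Widening or deepening the MLP changes the score; retraining it would not.
base = mlp_entropy([256, 256, 256])
wide = mlp_entropy([512, 512, 512])
deep = mlp_entropy([256, 256, 256, 256])
```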
File "/data/syj/miniconda3/envs/train2.6/lib/python3.7/site-packages/mmcv/utils/registry.py", line 164, in build_from_cfg
f'{obj_type} is not in the {registry.name} registry')
KeyError: 'MadNas is not in the backbone registry'
For detection training, I followed readme.md but got a KeyError, even though the backbone looks like it should already be registered. Could anyone help?
May I ask if there is code corresponding to Guideline 3 (Non-Decreasing Number of Channels) in the DeepMAD paper? I can't find this constraint in https://github.com/alibaba/lightweight-neural-architecture-search/blob/main/tinynas/spaces/mutator/super_res_k1kxk1_mutator.py
The MAE-NAS tutorial link for DAMO-YOLO is broken; please fix it.
Could you please provide step by step procedure?
I'm wondering how to use the 3D CNN provided in the project. When we modify the parameters in the config files, it always raises surprising errors. Could you please give us some instructions on the usage of the 3D CNN?
As shown in the config file config_nas.py, are only the V100 and T40 supported here?
If I want to use 2 × 2080Ti, how can I change that?
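A hypothetical sketch only, since the real config_nas.py schema may differ: the general pattern in NAS configs is that the device name mainly selects a latency lookup table, while the GPU count is a launcher setting. All keys below are placeholders:

```python
# Hypothetical placeholder keys, not the actual config_nas.py schema.
config = dict(
    num_gpus=2,             # e.g. 2 x 2080Ti for running the search itself
    latency_device='V100',  # reuse an existing latency table, or profile your own
)
```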
Hello, thanks for your great work. I ran scripts/damo-yolo/example_k1kx_small.sh twice; the two best_structure.txt results differ from each other and from your damo-yolo-s structure. I wonder if this is normal?
I hope to run an object detection algorithm on a mobile device, so I need to build a latency inference library for that device. According to tinynas/latency/op_profiler/README.md, this can be done with the modules under tinynas/latency/op_profiler/python/. But when I run
sh sample.sh
it requires a file named venus_eval_test_uclibc, which I could not find anywhere. How can I resolve this?
Thanks for giving me a new insight into the MAD method of NAS. But after training it in my pipeline, the result is still not good. Could you share a training pipeline? I really can't think of any other issue.
Thanks a lot!
The constraint limits the `btn` in
The paper says we can use a CPU with little memory, but when I run sh tools/dist_search.sh configs/classification/deepmad_29M_224.py,
32 GB of memory is used within 10 seconds and the job is aborted.
If I want to use the DeepMAD method to create a ResNet with a specific parameter count, or to create my own ResNet101, how should I call the script? Is this part open source?
I read your paper and understand how the entropy score is calculated without using any data, but does the EA part of the program use real data? I am trying to figure out how to use your program on my own dataset. I see at https://github.com/alibaba/lightweight-neural-architecture-search/blob/main/scripts/classification/README.md you describe how to use the searched model in my own pipeline, but do I use my own data at all to train your program?
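To illustrate the separation of concerns asked about above, here is a hedged sketch (not the repo's actual implementation) of a zero-cost evolutionary search: every mutated candidate is scored by a data-free proxy computed from the architecture alone, so no real images are touched during the search; your own dataset is only needed afterwards, to train the best structure the search returns. `proxy_score`, `mutate`, and `evolve` are illustrative names:

```python
import math
import random

def proxy_score(widths):
    # Stand-in for a zero-cost score computed from the architecture alone.
    return sum(math.log(w) for w in widths)

def mutate(widths, rng):
    # Randomly halve or double one layer's width (floor of 8 channels).
    ws = list(widths)
    i = rng.randrange(len(ws))
    ws[i] = max(8, int(ws[i] * rng.choice([0.5, 2.0])))
    return ws

def evolve(init, param_budget, iters=200, seed=0):
    rng = random.Random(seed)
    best = init
    for _ in range(iters):
        cand = mutate(best, rng)
        # Accept only candidates that fit the budget and raise the proxy score;
        # note no training data is consulted anywhere in this loop.
        if sum(cand) <= param_budget and proxy_score(cand) > proxy_score(best):
            best = cand
    return best

best = evolve([32, 32, 32], param_budget=512)
```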
May I ask how to choose the image_size? Because the image_size seems to be different between https://github.com/tinyvision/DAMO-YOLO/blob/49dd0dbdcbf9ec67cb9c7b1ae2b571f71449b9db/configs/damoyolo_tinynasL20_T.py#L26 and
Besides, I would like to ask why the input channel is 12 instead of 3 in
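The 12-channel input is most likely the Focus / space-to-depth stem common in YOLO-style backbones: each 2×2 pixel block of the 3-channel image is rearranged into the channel dimension, giving 3 × 2 × 2 = 12 channels at half resolution. A minimal NumPy sketch of that rearrangement:

```python
import numpy as np

def space_to_depth(x, block=2):
    # (C, H, W) -> (C * block^2, H // block, W // block):
    # moves each block x block spatial patch into the channel dimension.
    c, h, w = x.shape
    x = x.reshape(c, h // block, block, w // block, block)
    x = x.transpose(0, 2, 4, 1, 3)
    return x.reshape(c * block * block, h // block, w // block)

img = np.zeros((3, 640, 640))
out = space_to_depth(img)
print(out.shape)  # channels: 3 * 2 * 2 = 12
```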
The FLOPs calculation during the search is missing the multiplication by 2.
def get_flops(self, resolution):
    # original: return self.flops * resolution ** 2
    return self.flops * resolution ** 2 * 2
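The factor of 2 here is the usual MACs-to-FLOPs conversion: each multiply-accumulate is one multiply plus one add. A small sanity check for a single convolution layer (an illustrative helper, not code from the repo):

```python
def conv_flops(c_in, c_out, k, h_out, w_out):
    # Each output element needs c_in * k * k multiply-accumulates;
    # one MAC = one multiply + one add, hence the factor of 2.
    macs = c_in * k * k * c_out * h_out * w_out
    return 2 * macs

print(conv_flops(64, 64, 3, 56, 56))
```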
I got an error when I use it like this: dict(type="max_feature", budget=0.5e6),
For MBV2 the error is: Searcher: 'ResK1DWK1' object has no attribute 'nbitsA'
For R50 the error is: get_max_feature_num() got an unexpected keyword argument 'nbitsA_out'
Thanks for this amazing repo. I'm currently working on training an efficient low-precision backbone and deploying it on an ARM Cortex-M7 MCU device with limited resources (512kB RAM, 2MB Flash). I believe I need to convert the mixed-precision quantization model to a tflite model to achieve this.
Could you please guide how to perform this conversion and deployment? Thanks.
The details are as follows (the OSS download link returns a NoSuchKey error):
NoSuchKey: The specified key does not exist.
Key: DeepMAD/DeepMAD-R18/R50.pth.tar
Host: idstcv.oss-cn-zhangjiakou.aliyuncs.com
RequestId: 655818022C55923437916A7E
EC: 0026-00000001 (https://api.aliyun.com/troubleshoot?q=0026-00000001)
Which part of the code corresponds to MAE-DET's feature-map variance computation? Was it converted into a computation over channel counts, similar to DeepMAD, i.e. the MadNAS score?
Open MPI is not supported under windows.
Thanks for sharing the code of such an interesting work! Just got a few questions when reproducing the code:
The links to the pre-trained image classification models (RXX-like.pth) seem to be broken. Do you have any backup links for them?
Will you plan to release the code for training these image classification models (RXX-like.pth) by any chance?
I followed the instructions to deploy the pre-trained object detection models on GFL-V2; I was able to reproduce similar results for maedet-s (box_mAP: 0.451) and maedet-l (box_mAP: 0.478) as reported. However, the results for maedet-m were quite unusual:
{'box_mAP': 0.039, 'box_mAP_50': 0.067, 'box_mAP_75': 0.039, 'box_mAP_s': 0.019, 'box_mAP_m': 0.048, 'box_mAP_l': 0.057, 'box_mAP_copypaste': '0.039 0.067 0.039 0.019 0.048 0.057'}
When you get a chance, could you please help verify if you get the same outputs for maedet-m?
Thank you very much!
Downloading and Extracting Packages
Preparing transaction: done
Verifying transaction: done
Executing transaction: /
For Linux 64, Open MPI is built with CUDA awareness but this support is disabled by default.
To enable it, please set the environment variable OMPI_MCA_opal_cuda_support=true before
launching your MPI processes. Equivalently, you can set the MCA parameter in the command line:
mpiexec --mca opal_cuda_support 1 ...
In addition, the UCX support is also built but disabled by default.
To enable it, first install UCX (conda install -c conda-forge ucx). Then, set the environment
variables OMPI_MCA_pml="ucx" OMPI_MCA_osc="ucx" before launching your MPI processes.
Equivalently, you can set the MCA parameters in the command line:
mpiexec --mca pml ucx --mca osc ucx ...
Note that you might also need to set UCX_MEMTYPE_CACHE=n for CUDA awareness via UCX.
Please consult UCX's documentation for detail.
done