duanyiqun / auto-reid-fast
A PyTorch implementation of using DARTS to search for a better structure for Re-ID
What are your_partition and your_node_num? Please explain.
Hi, I don't have the resources to train the model myself. It would be helpful if you could share the trained model.
When I run this command:
srun -n 128 --gres 1 -p 0.5 python train_baseline_search_triplet.py --distributed True --config configs/Retrieval_classification_DARTS_distributed_triplet.yaml
(I do not know what -n and -p mean, or which values to use for them.)
I get this error:
bash: srun: command not found
How do I fix it? Thanks!
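For context: srun is the job launcher of the Slurm workload manager, so that command only works on a cluster with Slurm installed (-n sets the number of tasks, -p names the partition/queue, --gres requests GPUs). On a single machine without Slurm, one possible alternative — untested against this repo's argument parsing — is PyTorch's own distributed launcher:

```shell
# Sketch, assuming the script tolerates the extra --local_rank argument
# that torch.distributed.launch passes to it; 8 here stands for the
# number of GPUs on the machine and is only an example value.
python -m torch.distributed.launch --nproc_per_node=8 \
    train_baseline_search_triplet.py --distributed True \
    --config configs/Retrieval_classification_DARTS_distributed_triplet.yaml
```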
Hello! Thank you for your contribution.
I would like to know: in distributed training, do the triplet loss computations communicate across processes? That is, is the hardest positive (negative) sample selected within each process, or across all processes?
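For reference, a common pattern (not necessarily what this repo does): with DistributedDataParallel, each process mines the hardest positive/negative only within its own local mini-batch unless features are explicitly all-gathered first. A minimal sketch of batch-hard mining with an optional cross-process gather — the function names `gather_features` and `batch_hard_triplet` are illustrative, not from the repo:

```python
import torch
import torch.distributed as dist

def gather_features(features):
    # If torch.distributed is initialized, collect features from every
    # process so mining can see the global batch; otherwise mining stays
    # local to the current process.
    if dist.is_available() and dist.is_initialized():
        gathered = [torch.zeros_like(features) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, features)
        gathered[dist.get_rank()] = features  # keep autograd on the local shard
        return torch.cat(gathered, dim=0)
    return features

def batch_hard_triplet(features, labels, margin=0.3):
    # Pairwise Euclidean distances within the (possibly gathered) batch.
    dists = torch.cdist(features, features)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    # Hardest positive: the farthest sample sharing the anchor's label.
    hardest_pos = (dists * same.float()).max(dim=1).values
    # Hardest negative: the closest sample with a different label
    # (positives are masked out with a large constant).
    hardest_neg = (dists + same.float() * 1e6).min(dim=1).values
    return torch.relu(hardest_pos - hardest_neg + margin).mean()
```

Without the gather, the effective mining pool shrinks as the number of processes grows, which can noticeably weaken batch-hard triplet training.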
Hi, Duan.
Thank you for your fantastic work, but I ran into a small problem.
I couldn't find the code that trains and tests the searched model structure, so I adapted code based on "a strong baseline for reid", but the experimental results are very poor. Could you please tell me how to train the searched structure, or share the training code? If it is not convenient to open-source it, could you send it to my email [email protected]?
Hi, I am wondering why the inputs to the triplet loss are the same as those to the cross-entropy loss.
def __init__(self, lamb=0.5):
    super().__init__()
    self.lamb = lamb
    self.cross_entropy = cross_entropy()
    self.triplet_loss = TripletLoss()

def forward(self, inputs, labels):
    return self.lamb * self.cross_entropy(inputs, labels) + (1 - self.lamb) * self.triplet_loss(inputs, labels)
The inputs to a cross-entropy loss should be logits, while the inputs to a triplet loss should be features. How does the triplet loss work in your code? I cannot find the exact implementation.
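One common way to resolve this mismatch — a sketch of the general pattern, not this repo's actual code — is to have `forward` accept logits and embedding features separately, feeding each to the appropriate term. The class name and its built-in batch-hard triplet term below are illustrative:

```python
import torch
import torch.nn as nn

class CombinedLoss(nn.Module):
    """Illustrative combined loss: logits feed the cross-entropy term,
    embedding features feed a batch-hard triplet term."""

    def __init__(self, lamb=0.5, margin=0.3):
        super().__init__()
        self.lamb = lamb
        self.margin = margin
        self.cross_entropy = nn.CrossEntropyLoss()

    def triplet(self, features, labels):
        # Batch-hard mining: farthest same-label sample vs. closest
        # different-label sample for each anchor.
        dists = torch.cdist(features, features)
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        hardest_pos = (dists * same.float()).max(dim=1).values
        hardest_neg = (dists + same.float() * 1e6).min(dim=1).values
        return torch.relu(hardest_pos - hardest_neg + self.margin).mean()

    def forward(self, logits, features, labels):
        return (self.lamb * self.cross_entropy(logits, labels)
                + (1 - self.lamb) * self.triplet(features, labels))
```

If a single `inputs` tensor is passed to both terms, either the cross-entropy sees features instead of logits or the triplet loss operates on logits, which is why the questioner's concern is reasonable.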
Hello, Duan. I am trying to reproduce this code, and I have some questions about the distributed setup. I can't use this command:
srun -n your_node_nums --gres gpu:gpunums -p your_partition
Error:
The program 'srun' is currently not installed. To run 'srun' please ask your administrator to install the package 'slurm-client'
Also, when I hit a CUDA out-of-memory error, what should I do to solve it? Looking forward to your help!
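On the CUDA out-of-memory point, one common workaround (independent of this repo) is to shrink the per-GPU batch size and accumulate gradients over several micro-batches, keeping the effective batch size while lowering peak memory. The `train_step` helper below is a hypothetical sketch, not code from this project:

```python
import torch
import torch.nn as nn

def train_step(model, optimizer, loss_fn, batches, accum_steps=4):
    # Accumulate gradients over `accum_steps` micro-batches so the
    # effective batch size stays large while per-step memory shrinks.
    optimizer.zero_grad()
    total = 0.0
    for i, (inputs, labels) in enumerate(batches):
        # Scale each micro-batch loss so the accumulated gradient matches
        # what one large batch would have produced.
        loss = loss_fn(model(inputs), labels) / accum_steps
        loss.backward()
        total += loss.item()
        if (i + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
    return total
```

Halving the batch size in the YAML config and doubling `accum_steps` is the usual trade: same effective batch, roughly half the activation memory, at the cost of more forward/backward passes per optimizer step.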