Welcome to the Weights & Biases video segmentation contest!
Your goal is to train a neural network model that can select the foreground object from a video clip like the one below:
This task is known as "primary object segmentation" or "video object segmentation" (VOS).
The quality of segmentations will be assessed using the Intersection over Union (IoU) metric.
To mimic the constraints of designing for limited compute, like mobile devices, you're required to keep your network's parameter count below 50 million. Tools for profiling networks built in Keras and PyTorch are included in the contest repository.
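To stay under the budget, it helps to be able to estimate parameter counts by hand as you design. The sketch below is a framework-agnostic illustration of the arithmetic (in PyTorch the equivalent is `sum(p.numel() for p in model.parameters())`; in Keras, `model.count_params()`) — the profiling tools in the contest repository remain the authoritative count.

```python
from math import prod

def count_params(weight_shapes):
    """Total parameter count, given the shape of each weight tensor."""
    return sum(prod(shape) for shape in weight_shapes)

# Example: a 3x3 conv mapping 3 channels to 16, plus its bias vector
# -> 16 * 3 * 3 * 3 + 16 = 448 parameters
count_params([(16, 3, 3, 3), (16,)])  # 448
```

Summing over every layer's weight and bias shapes this way gives the number the contest compares against the 50 million limit.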
The prizes will be online retail gift certificates. For winners inside the United States, this will be an Amazon gift card.
- First prize - $1000 gift certificate
- Second prize - $500 gift certificate
- Third prize - $250 gift certificate
The contest is open to Qualcomm employees only.
This competition is split into two phases:
- a training phase, where you can train on a public training set and compare your performance to other participants on a public validation set, and
- a test phase, where a test dataset without labels will be provided and participants will submit their solutions to be scored on a private leaderboard.
Prizes will be awarded based on performance during the test phase only. Be careful not to over-engineer your model on the training and validation data! In large public competitions and in industrial machine learning, this kind of over-fitting dooms many promising projects.
The test phase will begin at midnight Pacific time on March 29th, 2021. See the Timeline section below.
- Sign up for W&B using your Qualcomm email. Note: The contest is open to Qualcomm employees only.
- Check out the Colab notebook for your preferred framework (PyTorch/Lightning or TensorFlow/Keras) for some starter code, then build on it with your own custom data pipelines, training schemes, and model architectures. You can develop in Colab or locally (see the Installing the contest Package section below).
- Once you're happy with your trained model, produce your formatted results, as described in the Formatting Your Results section below.
- Evaluate those results using the evaluation notebook. See that notebook for details on how results will be scored.
- Submit your evaluation run to the public leaderboard.
Submissions are manually reviewed and will be approved within two business days.
Submitting evaluation runs is a great way to ensure your code runs smoothly on data in the format used in the test phase, that your results are properly formatted, and that your submissions are valid, so make sure to do so!
- Download the video clips for the test data set (link information TBA).
- Run your trained model on that data, producing formatted results, just like in the training phase (see Formatting Your Results below).
- Submit your results run to the private leaderboard (link information TBA).
New to online contests with W&B, deep learning, or video segmentation? No problem! We have posted resources to help you understand the W&B Python library, deep learning frameworks, suitable algorithms, and neural networks generally, under the Resources section below.
Questions? Use the #qualcomm-competition slack channel, or email [email protected].
This section provides instructions for installing the contest package from the GitHub repository for this competition.
There are three versions of the package: one that only installs the core tools, for formatting results and managing dataset paths, and two versions that provide extra tools for getting started in two popular deep learning frameworks.
Check out the starter notebooks (PyTorch, Keras) to see how the package is used.
The package can be installed with `pip`, the standard package installer for Python:

pip install "git+https://github.com/wandb/davis-contest.git#egg=contest"
To install the `contest.keras` or `contest.torch` framework subpackages, provide the name of the framework at the end of the `pip install` command, using the optional dependencies syntax:

pip install "git+https://github.com/wandb/davis-contest.git#egg=contest[framework]"

where `framework` is one of `keras`, `torch`, or `keras,torch`.
See the starter notebooks (PyTorch, Keras) for more detail, including screenshots and code, on constructing and formatting your results.
Results are to be submitted in the form of a Weights & Biases Artifact. W&B's Artifacts system (docs) provides methods for storing, distributing, and version-controlling datasets, models, and other large files. Artifacts are also used to distribute the training, validation, and test datasets for this contest. See this video tutorial and associated Colab notebook for more on how to use Artifacts.
We provide utility functions to produce a results artifact from a directory of model outputs in the repository here.
The best way to check that your results are being formatted correctly is to run the submission notebook, look through the table that it uploads to Weights & Biases, and submit the run for approval.
A results artifact must contain at least the following:
- a file called `paths.json`, containing a key `"output"` whose value is a dictionary ("object" in JSON lingo) with keys that are integer strings and values that are strings defining paths to files,
- at each path, a PNG file representing the model's outputs for the input frame from the dataset with the same integer index. This PNG file should be greyscale/luminance, with each byte representing an unsigned 8-bit integer (the `L` mode in PIL), and
- in the metadata, the key `nparams`, counting the number of parameters in the model (including all components).
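The PNG requirement above can be met with Pillow. The following is an illustrative sketch — the frame shape, filename, and mask contents here are hypothetical, not part of the spec:

```python
import numpy as np
from PIL import Image

# A hypothetical model output for one frame: 0 = background, 255 = foreground.
mask = np.zeros((480, 854), dtype=np.uint8)
mask[100:300, 200:600] = 255

# Save as an 8-bit greyscale PNG (PIL's "L" mode), as the results format requires.
Image.fromarray(mask, mode="L").save("0.png")
```

The filename's integer stem ("0" here) should match the integer index of the corresponding input frame in the dataset.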
The `paths.json` file can be generated easily by saving a pandas DataFrame, with an integer index and a column called `"output"`, using the `.to_json` method.
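For example, assuming your PNGs live in a directory called `output` (the paths below are placeholders for wherever your model writes its frames):

```python
import pandas as pd

# A DataFrame with the default integer index and a single "output" column
# mapping each frame index to its PNG path.
df = pd.DataFrame({"output": [f"output/{i}.png" for i in range(3)]})

# DataFrame.to_json's default orient ("columns") nests the paths under the
# "output" key with integer-string keys, matching the required format.
df.to_json("paths.json")
```

Opening the resulting file shows the expected shape: `{"output": {"0": "output/0.png", "1": ..., "2": ...}}`.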
See the code in the starter notebooks and repository for examples,
including for how to create the W&B Artifact.
The contents of the PNG files will be used to compute your Intersection over Union (IoU) score against the ground-truth segmentation masks.
See the evaluation code in the `contest` package for details on how results will be scored, in particular the functions `iou_from_output` and `binary_iou`.
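For intuition, binary IoU for a pair of masks can be computed as below. This is an illustrative sketch, not the contest's scoring code — the `iou_from_output` and `binary_iou` functions in the repository are authoritative:

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:  # both masks empty: treat as a perfect match
        return 1.0
    return np.logical_and(pred, target).sum() / union

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
iou(a, b)  # intersection 1, union 2 -> 0.5
```

A perfect segmentation scores 1.0; a prediction with no overlap with the ground truth scores 0.0.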
- February 16 - Contest announced, training phase begins, public leaderboard opens
- March 29, 12:00am Pacific - training phase ends, test phase begins: test set made available for inference, private leaderboard opens
- March 31, 11:59pm Pacific - test phase ends: private leaderboard closes to new submissions
- Mid-April - Winners announced
- Early May - Retrospective webinar
See Contest Terms & Conditions for details, including eligibility requirements and locations.
- You are free to use any framework you feel comfortable in, but you are responsible for accurately counting parameters.
- You may only submit results from one account.
- You can submit as many runs as you like.
- You can share small snippets of your code online or in our Slack community, but not the full solution -- that means, e.g., your GitHub repo should not be public, and you should keep your Weights & Biases project set to private.
- You may similarly use snippets of code from online sources, but the majority of your code should be original. Originality of solution will be taken into account when scoring submissions. Submissions with insufficient novelty will be disqualified.
These Google Colab notebooks describe how to get started with the contest and submit results.
Notebook | Link |
---|---|
Get Started in PyTorch | |
Get Started in Keras | |
Evaluate Your Results | |
Using Pretrained Networks | |
Google Colab is a convenient hosted environment you can use to run the baseline and iterate on your models quickly.
To get started:
- Open the baseline notebook you'd like to work with from the table above.
- Save a copy in Google Drive for yourself.
- To ensure the GPU is enabled, click Runtime > Change runtime type. Check that the "hardware accelerator" is set to GPU.
- Step through each section, pressing play on the code blocks to run the cells.
- Add your own data engineering and model code.
- Review the Getting Started section for details on how to submit results to the public leaderboard.
If you have any questions, please feel free to email us at [email protected] or join our Slack community and post in the channel for this competition: #qualcomm-competition.
- The Weights & Biases docs
- The paper describing the training and validation set
- PapersWithCode benchmark for training and validation set
- A paper on 3DC-Seg, the current best-performing method for this task. NOTE: this method uses almost triple this contest's parameter budget. You can use the approach for inspiration, but you'll need to look for ways to cut the parameter count.
- A paper on MATNet, another state-of-the-art method for this task. NOTE: as above, the parameter count is well over the limit -- roughly double the budget.
- A paper on designing ConvNets for mobile devices