The goal of this repository is to use the data from the "iMet Collection 2019 - FGVC6" competition to train multi-label models on image data and to build overall skill in image modeling techniques such as deep learning and transfer learning.
- Documentation [link]
- Project working cycle and effort, relevant content and insights [link]
- EDA [link]
- iMet Collection 2019 - EDA & Keras [link]
- Model [link]
Kaggle competition: https://www.kaggle.com/c/imet-2019-fgvc6
The Metropolitan Museum of Art in New York, also known as The Met, has a diverse collection of over 1.5M objects, of which over 200K have been digitized with imagery. The online cataloging information is generated by Subject Matter Experts (SMEs) and includes a wide range of data. These include, but are not limited to: multiple object classifications, artist, title, period, date, medium, culture, size, provenance, geographic location, and other related museum objects within The Met's collection. While the SME-generated annotations describe the object from an art history perspective, they can be indirect in describing the finer-grained attributes a museum-goer would notice. Adding fine-grained attributes to aid in the visual understanding of the museum objects will make it possible to search for visually related objects.
This is an FGVCx competition hosted as part of the FGVC6 workshop at CVPR 2019. See the GitHub page for more details.
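The multi-label transfer-learning setup described above can be sketched with Keras: a pretrained backbone, a sigmoid head (one independent probability per attribute), and binary cross-entropy loss. This is a minimal illustration, not the repository's actual model; the 1103-class output matches the iMet 2019 label set, and `weights=None` is used here only to keep the sketch download-free (real transfer learning would use `weights="imagenet"`).

```python
import numpy as np
from tensorflow import keras

NUM_CLASSES = 1103  # the iMet 2019 training set defines 1103 attribute labels

def build_model(num_classes=NUM_CLASSES, input_shape=(224, 224, 3)):
    # Transfer learning: reuse a pretrained backbone, attach a new head.
    # weights=None keeps this sketch download-free; in practice you would
    # pass weights="imagenet" and fine-tune.
    backbone = keras.applications.ResNet50(
        include_top=False, weights=None, input_shape=input_shape, pooling="avg"
    )
    backbone.trainable = False  # freeze the backbone for the first training phase

    # Multi-label head: sigmoid per class (not softmax), so several
    # attributes can be active for the same image.
    outputs = keras.layers.Dense(num_classes, activation="sigmoid")(backbone.output)
    model = keras.Model(backbone.input, outputs)

    # Binary cross-entropy treats each label as an independent yes/no decision.
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = build_model()
preds = model.predict(np.zeros((1, 224, 224, 3), dtype="float32"))
print(preds.shape)  # (1, 1103): one probability per attribute
```

The key design choice is sigmoid + binary cross-entropy rather than softmax + categorical cross-entropy, since each artwork can carry many attribute labels at once.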