Name: Cristina Luna Jiménez
Type: User
Company: Universidad Politécnica de Madrid
Bio: PhD student at
GTH (http://gth.die.upm.es/index-e.html) and
GreenLSI (http://greenlsi.die.upm.es/)
Dept. of Electronic Engineering.
Location: Av. Complutense, 30
Cristina Luna Jiménez's Projects
Handwritten notes from Andrew Ng's Coursera course.
AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head
Repository of the paper: Mental-Health Topic Classification employing D-vectors of Large Language Models
Deep learning model trained to automatically identify cover songs using Siamese convnets tied together with a fully connected softmax layer.
Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, and Chu-Song Chen, "Compacting, Picking and Growing for Unforgetting Continual Learning," Thirty-third Conference on Neural Information Processing Systems, NeurIPS 2019
CYBEROPS_config
Audio Denoising with Deep Network Priors
“Robust Lightweight Facial Expression Recognition Network with Label Distribution Training”, AAAI, 2021.
Deep Neural Network for joint emotion classification and landmark localization.
Repo hosting info about subjects and their practical exercises
Deep 3DMM facial expression parameter extraction
Facial Expression Recognition with a deep neural network as a PyPI package
How to track relative popularity of multiple keywords and discover their stories using Pytrends, a Google Trends python library.
Repository for the IMDB publication
Repository with the code and experiments performed for the article: Interpreting Sign Language Recognition using Transformers and MediaPipe Landmarks
Preliminary study of trustworthiness annotations at zero acquaintance
Matplotlib Jupyter Integration
Code release for MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition, CVPR 2022
This repository introduces MentaLLaMA, the first open-source instruction following large language model for interpretable mental health analysis.
Repository with the code of the paper: A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on the RAVDESS dataset
OpenMMLab Semantic Segmentation Toolbox and Benchmark.
mPLUG-Owl🦉: Modularization Empowers Large Language Models with Multimodality
Steven C. Y. Hung, Jia-Hong Lee, Timmy S. T. Wan, Chein-Hung Chen, Yi-Ming Chan and Chu-Song Chen. "Increasingly Packing Multiple Facial-Informatics Modules in A Unified Deep-Learning Model via Lifelong Learning" 2019 ACM on International Conference on Multimedia Retrieval