Name: Ricardo J S Cruz
Type: User
Bio: Hello!! 😄
I'm Ricardo Cruz.
Statistics student with an MBA in Data Science.
My focus here is on projects in Data Analysis and Machine Learning.
Twitter: ricardojscruz
Location: Niterói, Rio de Janeiro, Brasil
Blog: https://www.linkedin.com/in/ricardojscruz/
Ricardo J S Cruz's Projects
📚 Papers & tech blogs by companies sharing their work on data science & machine learning in production.
Machine Learning University: Accelerated Tabular Data Class
Machine Learning University: Decision Trees and Ensemble Methods
Repository for the files of Awari's real-estate price calculator project
Beyond Accuracy: Behavioral Testing of NLP models with CheckList
A classifier that predicts the churn probability of a customer.
https://awarischool.github.io/colorizando-fotos/
Repository for data science book
Plugin to integrate Learning to Rank (aka machine learning for better relevance) with Elasticsearch
Deep neural network to extract intelligent information from invoice documents.
Materials provided for the labs of the Big Data course
Lessons on Text Mining and Natural Language Processing
A project-based course on the foundations of MLOps with a focus on intuition and application.
Turn on your webcam. Mona Lisa's eyes will follow you around.
The Official PyTorch Implementation of "NVAE: A Deep Hierarchical Variational Autoencoder"
Python code for "Machine learning: a probabilistic perspective" (2nd edition)
base
A soft-label k-Nearest Neighbors classifier.
💫 Industrial-strength Natural Language Processing (NLP) in Python
VIP cheatsheets for Stanford's CS 229 Machine Learning
We provide a PyTorch implementation of the paper "Voice Separation with an Unknown Number of Multiple Speakers", which presents a new method for separating a mixed audio sequence in which multiple voices speak simultaneously. The method employs gated neural networks trained to separate the voices over multiple processing steps while keeping the speaker in each output channel fixed. A separate model is trained for each possible number of speakers, and the model with the largest number of speakers is used to select the actual number of speakers in a given sample. The method greatly outperforms the prior state of the art, which, as the authors show, is not competitive for more than two speakers.
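One entry in the list above, the soft-label k-Nearest Neighbors classifier, lends itself to a short sketch. This is a minimal illustration, not the repository's code: the function name and toy data are my own, and I assume the standard formulation in which each training point carries a probability distribution over classes and a prediction averages the distributions of the k nearest neighbors.

```python
import numpy as np

def soft_knn_predict(X_train, Y_soft, x, k=3):
    """Predict a class distribution for x by averaging the soft labels
    (probability vectors) of its k nearest training points, using
    Euclidean distance."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    return Y_soft[nearest].mean(axis=0)           # average their label distributions

# Toy data: four 1-D points with soft labels over two classes.
X = np.array([[0.0], [0.1], [1.0], [1.1]])
Y = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]])

# A query near the first cluster leans toward class 0.
print(soft_knn_predict(X, Y, np.array([0.05]), k=2))  # → [0.85 0.15]
```

Because the output is itself a distribution, it can be thresholded, ranked, or fed into a downstream calibration step rather than collapsed to a hard class label.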