Naman Sharma's Projects
(REINFORCEMENT LEARNING): We are given a dataset recording which of 10 ads each visitor clicked on each visit to a webpage. The task is to identify the most-clicked ad, i.e. the one with the highest click-through rate among viewers, in the minimum number of rounds and with minimum resources. I used the "Upper Confidence Bound" and "Thompson Sampling" models to obtain these insights.
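A minimal sketch of the Upper Confidence Bound (UCB1) idea, using simulated click-through rates (the CTR values below are assumed for illustration, not taken from the actual dataset): each ad is shown once, then the ad maximising "average reward + confidence bonus" is picked each round.

```python
import math
import random

random.seed(0)

# Assumed click-through rates for 10 ads (illustrative only; ad 6 is best).
TRUE_CTR = [0.05, 0.10, 0.08, 0.12, 0.06, 0.09, 0.60, 0.07, 0.11, 0.04]
N_ROUNDS = 5000
n_ads = len(TRUE_CTR)

counts = [0] * n_ads   # times each ad was shown
rewards = [0] * n_ads  # total clicks each ad received

for t in range(1, N_ROUNDS + 1):
    if t <= n_ads:
        # Initialisation: show each ad once so every count is non-zero.
        ad = t - 1
    else:
        # UCB1 rule: mean reward plus an exploration bonus that shrinks
        # as an ad is shown more often.
        ucb = [rewards[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
               for i in range(n_ads)]
        ad = ucb.index(max(ucb))
    counts[ad] += 1
    rewards[ad] += 1 if random.random() < TRUE_CTR[ad] else 0

# The most-shown ad converges to the one with the highest true CTR.
best_ad = counts.index(max(counts))
```

Over enough rounds the exploration bonus stops favouring weak ads, so the selection concentrates on the best one without ever having known the true CTRs.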
(SQL): This project uses SQL over databases of hospitals and the resources they hold, to meet people's COVID-related needs efficiently using multi-table joins and complex queries.
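A small self-contained sketch of the kind of multi-table join involved, using Python's built-in `sqlite3` with a toy schema (the table and column names here are assumed, not the project's actual schema):

```python
import sqlite3

# Toy schema (assumed): one table of hospitals, one of per-hospital resources.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE hospitals (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE resources (hospital_id INTEGER, resource TEXT, quantity INTEGER,
                        FOREIGN KEY (hospital_id) REFERENCES hospitals(id));
INSERT INTO hospitals VALUES (1, 'City Care', 'Delhi'), (2, 'Apollo', 'Delhi');
INSERT INTO resources VALUES (1, 'oxygen_cylinder', 12), (1, 'icu_bed', 3),
                             (2, 'oxygen_cylinder', 0), (2, 'icu_bed', 7);
""")

# Join hospitals to resources: which Delhi hospitals still have ICU beds?
cur.execute("""
SELECT h.name, r.quantity
FROM hospitals h
JOIN resources r ON r.hospital_id = h.id
WHERE h.city = 'Delhi' AND r.resource = 'icu_bed' AND r.quantity > 0
ORDER BY r.quantity DESC
""")
rows = cur.fetchall()
```

The same join pattern extends to any resource type (oxygen, ventilators, plasma) by changing the `WHERE` filter.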
Dr Intelligent, AI Doctor
(REGRESSION): This dataset contains records of 7 common but distinct fish species sold at a fish market. Using it, I trained a regression model that predicts the weight of a fish with an R² score of 0.9813.
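The core of such a model is ordinary least squares. A minimal single-feature sketch, using assumed toy length/weight values (the real dataset has several measurements per fish):

```python
# Toy data (assumed): fish length (cm) vs weight (g), deliberately linear
# so the fitted line is easy to verify by hand.
lengths = [10.0, 15.0, 20.0, 25.0, 30.0, 35.0]
weights = [120.0, 180.0, 240.0, 300.0, 360.0, 420.0]

n = len(lengths)
mean_x = sum(lengths) / n
mean_y = sum(weights) / n

# Ordinary least squares for one feature: slope = cov(x, y) / var(x).
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(lengths, weights)) \
        / sum((x - mean_x) ** 2 for x in lengths)
intercept = mean_y - slope * mean_x

def predict(x):
    """Predicted weight for a fish of length x."""
    return intercept + slope * x
```

With multiple features the same idea generalises to the normal equations (or to `sklearn.linear_model.LinearRegression`, which is what a project like this would typically use).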
(CLASSIFICATION): This dataset contains information about 39,221 transactions. After trying and training models such as Naive Bayes and Decision Tree, I achieved 100% accuracy with Random Forest classification, which cleanly separated class 0 (non-fraudulent) from class 1 (fraudulent) in its leaf nodes using the 'entropy' criterion.
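The 'entropy' criterion mentioned here scores candidate splits by information gain. A small sketch with assumed toy labels showing why a split that produces pure leaf nodes maximises the gain:

```python
import math

def entropy(labels):
    """Shannon entropy of a label list: 0 for a pure node, 1 for a 50/50 binary node."""
    n = len(labels)
    ent = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        ent -= p * math.log2(p)
    return ent

# Toy node of fraud labels (0 = non-fraudulent, 1 = fraudulent) before a split...
parent = [0, 0, 0, 0, 1, 1, 1, 1]
# ...and the two children after a perfect split, i.e. pure leaf nodes.
left, right = [0, 0, 0, 0], [1, 1, 1, 1]

# Information gain = parent entropy minus weighted child entropy.
gain = entropy(parent) - (len(left) / len(parent)) * entropy(left) \
                       - (len(right) / len(parent)) * entropy(right)
```

A random forest repeats this split selection across many trees on bootstrapped samples and averages their votes, which is what allows the clean 0/1 separation described above.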
Here I created a spider using the Scrapy Python framework that crawls a given Google search results page, extracts all the YouTube channel links present on it, and stores them in a CSV file.
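The actual project uses Scrapy's selectors and feed exports; the following is a simplified stdlib-only sketch of the extraction-and-save step, run on an assumed HTML snippet rather than a live page:

```python
import csv
import io
import re

# Sample HTML standing in for a fetched search-results page (assumed content).
html = """
<a href="https://www.youtube.com/channel/UCabc123">Channel A</a>
<a href="https://example.com/not-youtube">Other</a>
<a href="https://www.youtube.com/channel/UCdef456">Channel B</a>
"""

# In the real spider, response.css('a::attr(href)').getall() yields the hrefs;
# here a regex pulls YouTube channel URLs straight out of the raw HTML.
links = re.findall(r'https://www\.youtube\.com/channel/[\w-]+', html)

# Write the links to CSV (in-memory here; the spider writes an actual file).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["channel_url"])
for link in links:
    writer.writerow([link])
csv_text = buf.getvalue()
```

Scrapy adds the crawling machinery around this core: scheduling requests, following pagination, and exporting items via `-o channels.csv`.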
(CLASSIFICATION): Given data on 303 patients, the task is to predict whether a patient suffers from heart disease. After training various classifiers, a KNN model achieved the highest k-fold cross-validation score of 86.9%, with a standard deviation of 9.9%.
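A minimal sketch of the two ideas in that sentence, on assumed toy data: KNN prediction by majority vote among nearest neighbours, and the k-fold score reported as a mean ± standard deviation over per-fold accuracies (the fold scores below are illustrative, not the project's):

```python
import statistics

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    nearest = sorted(train,
                     key=lambda p: (p[0] - query[0]) ** 2 + (p[1] - query[1]) ** 2)
    votes = [label for _, _, label in nearest[:k]]
    return max(set(votes), key=votes.count)

# Toy 2D points (x, y, label): two separable groups standing in for patient features.
train = [(1, 1, 0), (1, 2, 0), (2, 1, 0), (8, 8, 1), (8, 9, 1), (9, 8, 1)]
pred = knn_predict(train, (1.5, 1.5))  # lies inside the first cluster

# K-fold reporting: figures like 86.9% +/- 9.9% are the mean and standard
# deviation of the accuracy scores obtained on each held-out fold.
fold_scores = [0.80, 0.90, 0.85, 0.95, 0.90]  # assumed example values
mean_score = statistics.mean(fold_scores)
std_score = statistics.pstdev(fold_scores)
```

Reporting the standard deviation alongside the mean matters on a small dataset like 303 patients, since single-split accuracy can swing widely between folds.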
(REGRESSION): The data contains medical information and the costs billed by an insurance company for 1,338 people. The aim was to predict the insurance cost accurately; I trained a predictive model with an R² score of 0.857.
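The R² score quoted here measures the fraction of variance in the target explained by the model. A small sketch computing it from scratch on assumed toy charges and predictions:

```python
def r2_score(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot; 1.0 is a perfect fit, 0.0 matches a
    constant mean-value predictor, and negative values are worse than that."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)
    return 1 - ss_res / ss_tot

# Toy insurance charges (assumed values) vs a model's predictions.
y_true = [1000.0, 2000.0, 3000.0, 4000.0]
y_pred = [1100.0, 1900.0, 3100.0, 3900.0]
score = r2_score(y_true, y_pred)
```

So an R² of 0.857 means the model explains about 85.7% of the variance in billed costs, with the remainder left to noise and unmodelled factors.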
(CLUSTERING): The objective is to group Iris flowers into 3 sub-groups of 50 samples each (versicolor, virginica, setosa). Using K-Means and Agglomerative Clustering, I was able to separate them completely into the 3 species, with an accuracy score of 1.0.
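K-Means alternates between assigning points to their nearest centroid and moving each centroid to its cluster's mean. A minimal 1-D sketch with k=2 on assumed toy measurements (the real project clusters 4-D Iris measurements with k=3):

```python
def kmeans_1d(points, centroids, iters=10):
    """Tiny 1-D k-means: returns final centroids and the point assignments."""
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        # Assignment step: each point joins its nearest centroid's cluster.
        for p in points:
            idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two well-separated groups of toy values (assumed).
points = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
centroids, clusters = kmeans_1d(points, [0.0, 6.0])
```

Agglomerative clustering reaches the same grouping bottom-up, repeatedly merging the two closest clusters, which is why both methods agree on well-separated data like Iris setosa vs the other species.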
(ASSOCIATION RULE LEARNING): We are given 7,501 transactions from a popular commodity shop. The task is to find the best-suited pairs for a Buy 1 Get 1 offer to be run by the shop owner. I used the APRIORI model to select the most relevant pairs.
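Apriori ranks candidate rules by support, confidence, and lift. A compact sketch of the pair-ranking step on assumed toy transactions (the real dataset has 7,501):

```python
from itertools import combinations

# Toy basket data (assumed items and transactions).
transactions = [
    {'bread', 'butter'}, {'bread', 'butter', 'milk'}, {'bread', 'milk'},
    {'butter', 'milk'}, {'bread', 'butter'}, {'milk'},
]
n = len(transactions)
items = sorted({i for t in transactions for i in t})

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / n

# Lift = support(A and B) / (support(A) * support(B)); lift > 1 means the
# pair is bought together more often than independence would predict.
pair_lift = {}
for a, b in combinations(items, 2):
    s_ab = support({a, b})
    if s_ab > 0:
        pair_lift[(a, b)] = s_ab / (support({a}) * support({b}))

best_pair = max(pair_lift, key=pair_lift.get)
```

The pair with the highest lift is the natural Buy 1 Get 1 candidate: offering one item of the pair is most likely to pull in purchases of the other.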
Config files for my GitHub profile.
It is a Python program that creates a "TAMBOLA" game-play using the pyttsx3 text-to-speech library.
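The core of a Tambola caller is drawing the numbers 1-90 in random order with no repeats; a stdlib-only sketch of that logic (the actual program passes each announcement to pyttsx3, e.g. `engine = pyttsx3.init(); engine.say(text); engine.runAndWait()`, which is omitted here so the sketch runs without audio):

```python
import random

random.seed(42)  # fixed seed only so this sketch's draw order is reproducible

# Tambola draws each of the numbers 1..90 exactly once, in random order.
numbers = random.sample(range(1, 91), 90)

def call_number(num):
    """Build the caller's announcement; the real program speaks it via pyttsx3."""
    return f"Number {num}!"

# Announce the first few draws (the real game loops until someone wins).
announcements = [call_number(num) for num in numbers[:5]]
```

Player-facing features (ticket generation, claim checking for lines and full house) layer on top of this same draw sequence.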