
🚀 Hi, nice to see you here!

📞 Contact Information

🎓 Education

  • Bachelor of Technology in Information Technology JSS Academy of Technical Education, U.P. (Aug. 2019 – May 2023)

Relevant Coursework

  • Data Structures
  • Machine Learning
  • Deep Learning
  • Database Management
  • Artificial Intelligence
  • Data Analysis
  • Data Visualization
  • Natural Language Processing (NLP)

💼 Experience

Techoon IT Solutions

AI/ML Developer Intern ✨ Noida, UP | April 2024 – Present

  • Developing chatbots and applications on custom data using LangChain, RAG, and LLMs
  • Analyzing datasets and building ML/DL models
  • Working on generative AI projects

Quaintel Research

Research Analyst 📊 Bangalore, KA (Remote) | May 2023 – Present

  • Conducted market research and delivered insights into market trends.
  • Used Excel, Google Analytics, and OpenAI tools for data analysis.

Kaholas

NLP Engineer Intern 🤖 Delhi, DL | Nov 2022 – Feb 2023

  • Executed NLP algorithms for text classification and sentiment analysis.
  • Collaborated with the development team to reduce project development time.

Maxgen Technologies

Data Analyst Intern 🔍 Remote, DL | Apr 2022 – Jul 2022

  • Analyzed data and built predictive models using Python.
  • Created interactive dashboards in Power BI.

💻 Technical Skills

  • Languages: Python, R, C++, PHP, HTML/CSS, SQL
  • Developer Tools: Anaconda, VS Code, Google Cloud Platform, Jupyter, Apache Hadoop, KNIME, Excel, MS Azure
  • Technologies/Frameworks: AWS, Amazon Machine Learning (AML), Amazon Redshift, Linux, GitHub

Shikhar Aryan's Projects

annotating-multiple-pdfs-

This project aims to automate the process of highlighting specific keywords within PDF documents. Leveraging the PyMuPDF library (fitz module), along with pandas for handling keyword data stored in CSV format, the script systematically searches through each page of the PDF documents, identifies the keywords, and highlights them.

bank_loan_customer_calls

This project involves the analysis of two datasets: a portfolio dataset containing information about loan portfolios, and a communication history dataset containing information about communication interactions with customers. The aim of the project is to gain insights into customer behavior and loan portfolio trends.
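A minimal sketch of joining the two datasets with pandas; the column names here are hypothetical stand-ins for the real schema:

```python
import pandas as pd

# Illustrative columns; the real datasets are larger
portfolio = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "loan_amount": [5000, 12000, 7500],
})
calls = pd.DataFrame({
    "customer_id": [1, 1, 3],
    "outcome": ["answered", "no_answer", "answered"],
})

# Aggregate the communication history, then left-join onto the portfolio
call_counts = (
    calls.groupby("customer_id").size().rename("n_calls").reset_index()
)
merged = portfolio.merge(call_counts, on="customer_id", how="left")
merged["n_calls"] = merged["n_calls"].fillna(0).astype(int)
```

A left join keeps customers with no recorded calls, which is exactly the segment a collections analysis would not want to drop silently.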

biometric-based-voting-machine-

This project is a Voting Machine application built with Python and MySQL. It verifies voters' Aadhar card numbers, checks their eligibility to vote, and integrates with a finger recognition module for authentication. The user-friendly interface is created using Tkinter, ensuring smooth interaction for voters.

car_selling_price_predictor

This project develops a machine learning model to predict car selling prices based on various features. It includes data preprocessing, exploratory data analysis (EDA), and model training using Random Forest Regressor and Linear Regression algorithms. The Random Forest model achieves higher accuracy and is saved for future use.
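The model comparison can be sketched on synthetic data (the real project uses actual car features such as year and mileage); the non-linear target below illustrates why the ensemble tends to beat the linear baseline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the car dataset: the target contains a
# non-linear interaction a plain linear model cannot capture.
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(400, 3))
y = np.sin(X[:, 0]) * X[:, 1] + rng.normal(0, 0.3, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

rf = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_train, y_train)
lr = LinearRegression().fit(X_train, y_train)
```

On data like this the Random Forest scores higher on the held-out set, mirroring the project's finding; the winning model can then be persisted (e.g. with `joblib.dump`) for reuse.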

chatbot-gpt-3.5-turbo-based-

"Personal Chatbot Powered by GPT-3.5" is a Flask web application that uses OpenAI's GPT-3.5 model to provide a conversational chatbot experience. Users interact with the chatbot in natural language and receive responses grounded in the context of the conversation.

heart-attack-risk-prediction-model---random-forest

This notebook explores and analyzes the Heart Disease UCI dataset using Python libraries such as NumPy, Pandas, Matplotlib, Seaborn, and scikit-learn. It includes data visualization, feature engineering, model building using Random Forest Classifier, and evaluation of the model's performance in predicting the presence or absence of heart disease.

online-payment-fraud-detection-model-eda-classification

This project implements an online payment fraud detection system, combining exploratory data analysis (EDA) and classification using a decision tree classifier. The model achieves high accuracy in identifying fraudulent transactions, providing valuable insights for secure online transactions.
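A minimal sketch of the classification step on synthetic, imbalanced data (the feature names and the toy "large transfer" rule are illustrative, not the project's actual dataset):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in: transaction amount plus one noise feature,
# with a rare fraud class (~2% of rows) triggered by large amounts.
rng = np.random.default_rng(1)
n = 1000
amount = rng.exponential(100, n)
y = (amount > 400).astype(int)
X = np.column_stack([amount, rng.normal(0, 1, n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
```

One caveat worth noting for fraud data: with heavy class imbalance, raw accuracy is flattering, so precision and recall on the fraud class are the metrics to watch.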

recommendation-system-collaboratve-filtering-

This repository contains a collaborative filtering-based recommendation system for book recommendations. It uses user-item interactions to suggest books a reader is likely to enjoy, improving personalization in the book-recommendation domain.
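The idea can be sketched in plain Python with user-based cosine similarity; the users, titles, and ratings below are made up for illustration:

```python
import math

# Toy user -> {book: rating} interaction data (illustrative)
ratings = {
    "alice": {"dune": 5, "hobbit": 4, "emma": 1},
    "bob":   {"dune": 4, "hobbit": 5, "neuromancer": 4},
    "carol": {"emma": 5, "persuasion": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[b] * v[b] for b in common)
    nu = math.sqrt(sum(r * r for r in u.values()))
    nv = math.sqrt(sum(r * r for r in v.values()))
    return dot / (nu * nv)

def recommend(user, k=1):
    """Score unseen books by similarity-weighted ratings of other users."""
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for book, r in their.items():
            if book not in ratings[user]:
                scores[book] = scores.get(book, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Here Alice's tastes overlap heavily with Bob's, so Bob's unread book outranks Carol's in her recommendations.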

spotify_streams_prediction_model

This project builds a predictive model for music track popularity using machine learning. It also conducts Exploratory Data Analysis (EDA) to understand the dataset's patterns and correlations.

text-classification-transformer-model-

This sentiment analysis model utilizes a Transformer architecture to classify text sentiment into positive, negative, or neutral categories with high accuracy. It preprocesses text data, trains the model on the IMDB dataset, and effectively predicts sentiment based on user input.

text-generation-and-summarization-using-pipelines-

This project demonstrates sentiment analysis and text generation using Hugging Face Transformers. It covers preprocessing of text data, sentiment analysis with pipelines, and text generation with the GPT-2 model on a dataset of news articles, serving as a tutorial for implementing NLP tasks efficiently with Hugging Face's tools.

web-scraping---store-details-geolocation

This project involves scraping the location details of V-Mart retail stores including address, area, contact number, coordinates, timing, and directions from the official V-Mart website. It utilizes web scraping techniques with Python libraries such as requests, BeautifulSoup, and pandas.
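The parsing step can be sketched against a static HTML snippet (the markup and class names below are illustrative, not V-Mart's actual page structure; the real script fetches live pages with `requests` first):

```python
from bs4 import BeautifulSoup

# Illustrative markup standing in for a fetched store-locator page
html = """
<div class="store">
  <span class="address">12 MG Road, Pune</span>
  <span class="phone">+91-9000000000</span>
</div>
<div class="store">
  <span class="address">4 Park Street, Kolkata</span>
  <span class="phone">+91-9111111111</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
stores = [
    {
        "address": div.select_one(".address").get_text(strip=True),
        "phone": div.select_one(".phone").get_text(strip=True),
    }
    for div in soup.select("div.store")
]
```

The resulting list of dicts drops straight into `pandas.DataFrame(stores)` for export to CSV.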

web_scrapping-basic-technique

This assignment scrapes data from a Singapore government website using the BeautifulSoup and requests Python libraries. The scraped data includes the names, citizenships, and dates of birth of individuals listed in a table on the site; it is then processed and saved to a CSV file for further analysis.
