
KNTU::MATH➕(Graduated) {
Graphic designer 🍥
Religious researcher🛐
Patriot 🇮🇷🇩🇪🇺🇸
Gamer 🎮
}
KHU::Data miner 👨‍💻(Started) {
TDA🔍
Tableau📊
}

💻 Tech Stack:

C++ Python HTML5 Anaconda MySQL NumPy Plotly Pandas scikit-learn Google Cloud

📊 GitHub Stats:


Armin SabourMoghaddam's Projects

10000-most-popular-english-movies-2023-

🎬 Welcome to the Popular English Movies Dataset (2023)! 🎬 This dataset features information on a diverse collection of popular English movies and offers a wealth of opportunities for exploration and innovation in Data Science and Machine Learning.

adventureworks-sql-database

Welcome to the AdventureWorks Sample Database repository! AdventureWorks is a widely used sample database from Microsoft that represents a fictional bicycle manufacturer. It serves as an excellent resource for learning and practicing database management and SQL querying, and is an invaluable tool for developers, database administrators...

ann-churn_modelling

This dataset contains details of a bank's customers; the target is a binary variable indicating whether the customer left the bank (closed their account) or remains a customer.

binary-classification-screenshots

This repository contains an AI model trained to distinguish between screenshots and real pictures. The model is based on an improved version of the VGG architecture and has been trained and evaluated on a dataset of screenshots and real images.

cnn_mnist

The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems, and widely used for training and testing in the field of machine learning. It was created by "re-mixing" the samples from NIST's original datasets: the creators felt that since NIST's training dataset was taken from American Census Bureau employees, while the testing dataset was taken from American high school students, it was not well suited for machine learning experiments. Furthermore, the black-and-white images from NIST were normalized to fit into a 28x28 pixel bounding box and anti-aliased, which introduced grayscale levels.

The MNIST database contains 60,000 training images and 10,000 testing images. Half of the training set and half of the test set were taken from NIST's training dataset, while the other halves were taken from NIST's testing dataset. The original creators of the database keep a list of some of the methods tested on it; in their original paper, they use a support-vector machine to get an error rate of 0.8%.

Extended MNIST (EMNIST) is a newer dataset developed and released by NIST to be the (final) successor to MNIST. MNIST included images only of handwritten digits, while EMNIST includes all the images from NIST Special Database 19, a large database of handwritten uppercase and lowercase letters as well as digits. The images in EMNIST were converted into the same 28x28 pixel format, by the same process, as the MNIST images, so tools that work with the older, smaller MNIST dataset will likely work unmodified with EMNIST.
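As a toy illustration of the convolution operation at the core of CNNs trained on such image data, here is a minimal "valid" 2D convolution sketch in pure Python; the tiny image and the 3x3 vertical-edge kernel are arbitrary illustrative choices, not anything from this repository.

```python
# Minimal "valid" 2D convolution (technically cross-correlation, as in most DL libraries).
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1          # output height for "valid" padding
    ow = len(image[0]) - kw + 1       # output width
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A vertical-edge kernel applied to a tiny image with a sharp left/right split.
img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
edges = conv2d(img, kernel)  # every 3x3 window straddles the edge here
```

A CNN learns many such kernels from data rather than hand-designing them; MNIST's 28x28 inputs work the same way, just larger.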

data-structure

Welcome to the Data Structures Homework Repository! This repository serves as a platform for students to submit their homework assignments and for instructors to evaluate them. The website facilitates the submission of assignments and provides test cases to verify the correctness of the solutions.

epidemic-calculator

The Epidemic Calculator is a simple GUI application written in Python that uses the SEAIR model (Susceptible → Exposed → Asymptomatic → Infected → Removed) to simulate the spread of infectious diseases. It aims to provide a tool for understanding and contextualizing epidemiological parameters and forecasts.
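A minimal sketch of how such a SEAIR simulation can step forward in time with simple Euler integration; the parameter values below are illustrative assumptions, not the calculator's actual defaults.

```python
def seair_step(s, e, a, i, r, beta=0.5, sigma=0.2, p=0.5, gamma=0.1, dt=1.0):
    """One Euler step of S -> E -> (A or I) -> R; compartments are population fractions."""
    n = s + e + a + i + r
    new_inf = beta * s * (a + i) / n          # new exposures from contact with A and I
    ds = -new_inf
    de = new_inf - sigma * e                  # exposed progress at rate sigma
    da = p * sigma * e - gamma * a            # a fraction p become asymptomatic
    di = (1 - p) * sigma * e - gamma * i      # the rest become symptomatic
    dr = gamma * (a + i)                      # both infectious groups are removed at rate gamma
    return (s + ds * dt, e + de * dt, a + da * dt, i + di * dt, r + dr * dt)

state = (0.99, 0.01, 0.0, 0.0, 0.0)           # start with 1% of the population exposed
for _ in range(100):                           # simulate 100 days
    state = seair_step(*state)
```

Note the deltas sum to zero, so the total population is conserved at every step.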

game-of-life

The Game of Life, also known simply as Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves. It is Turing complete and can simulate a universal constructor or any other Turing machine.
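The update rule described above fits in a few lines of pure Python; this sketch represents the (unbounded) board as a set of live-cell coordinates.

```python
from collections import Counter

def life_step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates between a horizontal and a vertical bar of three cells.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Applying `life_step` twice returns the blinker to its original configuration, period 2 as expected.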

hand-gesture-controlled-volume-python-

This Python script uses computer vision and the MediaPipe library to detect hand gestures from a webcam feed and control the system's audio volume based on the hand gesture. By performing specific hand gestures in front of your webcam, you can easily adjust your computer's audio volume without touching any buttons.

insurance-claim-visualization

This repository contains resources and instructions for creating an Insurance Claim Prediction dashboard using Power BI. Power BI is a powerful data visualization and business intelligence tool that allows you to analyze and visualize data.

knn

K-Nearest Neighbors for machine learning.
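The whole algorithm fits in a short sketch: classify a query point by majority vote among its k nearest training points. This pure-Python version uses Euclidean distance; the toy data is made up for illustration.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.
    `train` is a list of (point, label) pairs; distance is Euclidean."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    neighbours = sorted(train, key=lambda pl: dist(pl[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

train = [((0, 0), "blue"), ((1, 0), "blue"), ((0, 1), "blue"),
         ((5, 5), "red"), ((6, 5), "red"), ((5, 6), "red")]
```

Choosing an odd k avoids ties in binary problems; in practice features are usually scaled first so no single dimension dominates the distance.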

mlp-multilayer-perceptron-_neural_network

This is the Python code of a neural network that uses forward propagation to compute the nets and activation functions, and backpropagation with the delta rule to update the weights.
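A minimal sketch of the same two ideas, not the repository's code: a forward pass through one sigmoid hidden layer, and a delta-rule update (using the sigmoid derivative y·(1−y)) on a single training example.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_h, w_o):
    """Forward pass: net = weighted sum of inputs, activation = sigmoid(net)."""
    h = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for w in w_h]
    o = sigmoid(sum(wi * hi for wi, hi in zip(w_o, h)))
    return h, o

def train_step(x, target, w_h, w_o, lr=0.5):
    """One backprop update with the delta rule (sigmoid derivative = y * (1 - y))."""
    h, o = forward(x, w_h, w_o)
    delta_o = (target - o) * o * (1 - o)
    for j in range(len(w_o)):
        delta_h = delta_o * w_o[j] * h[j] * (1 - h[j])  # error pushed back through w_o[j]
        for i in range(len(x)):
            w_h[j][i] += lr * delta_h * x[i]
        w_o[j] += lr * delta_o * h[j]

random.seed(0)
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # 2 hidden units
w_o = [random.uniform(-1, 1) for _ in range(2)]
x, target = (1.0, 0.0), 1.0
_, before = forward(x, w_h, w_o)
for _ in range(50):
    train_step(x, target, w_h, w_o)
_, after = forward(x, w_h, w_o)
```

After repeated updates the output moves monotonically toward the target, since the output-layer delta keeps its sign while the target is unreached.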

naive-bayes-classifier

Naive Bayes is a simple technique for constructing classifiers: models that assign class labels to problem instances, represented as vectors of feature values, where the class labels are drawn from some finite set. There is not a single algorithm for training such classifiers, but a family of algorithms based on a common principle: all naive Bayes classifiers assume that the value of a particular feature is independent of the value of any other feature, given the class variable.

For example, a fruit may be considered to be an apple if it is red, round, and about 10 cm in diameter. A naive Bayes classifier considers each of these features to contribute independently to the probability that this fruit is an apple, regardless of any possible correlations between the color, roundness, and diameter features.

In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods.
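Continuing the fruit example, a minimal categorical naive Bayes sketch in pure Python, with Laplace smoothing; the toy data below is made up for illustration.

```python
import math
from collections import Counter, defaultdict

def nb_train(samples):
    """samples: list of (feature_tuple, label) pairs."""
    priors = Counter(label for _, label in samples)
    counts = defaultdict(Counter)          # (feature_index, label) -> value counts
    for feats, label in samples:
        for i, v in enumerate(feats):
            counts[(i, label)][v] += 1
    return priors, counts

def nb_predict(priors, counts, feats, alpha=1.0):
    """Pick the label maximising log P(label) + sum_i log P(feature_i | label)."""
    total = sum(priors.values())
    best, best_score = None, float("-inf")
    for label, prior in priors.items():
        score = math.log(prior / total)
        for i, v in enumerate(feats):
            c = counts[(i, label)]
            # Laplace smoothing so an unseen value never zeroes out the product.
            score += math.log((c[v] + alpha) / (sum(c.values()) + alpha * (len(c) + 1)))
        if score > best_score:
            best, best_score = label, score
    return best

data = [(("red", "round"), "apple"), (("red", "round"), "apple"),
        (("green", "round"), "apple"),
        (("yellow", "long"), "banana"), (("yellow", "long"), "banana")]
priors, counts = nb_train(data)
```

Working in log space avoids numeric underflow when many feature probabilities are multiplied together.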

predicting-academic-performance

Abstract: In the past few years, educational data mining (EDM) has attracted the attention of researchers to enhance the quality of education. Predicting student academic performance is crucial to improving the value of education. Some research studies have been conducted that mainly focused on predicting students' performance in higher education. However, research related to performance prediction at the secondary level is scarce, whereas the secondary level tends to be a benchmark that describes students' learning progress at further educational levels. Students' failure or poor grades at the lower secondary level negatively impact them at the higher secondary level. Therefore, early prediction of performance is vital to keep students on a progressive track. This research intended to determine the critical factors that affect the performance of students at the secondary level and to build an efficient classification model through the fusion of single and ensemble-based classifiers for the prediction of academic performance. Firstly, three single classifiers, including a Multilayer Perceptron (MLP), J48, and PART, were observed, along with three well-established ensemble algorithms encompassing Bagging (BAG), MultiBoost (MB), and Voting (VT) independently. To further enhance the performance of the abovementioned classifiers, nine other models were developed by the fusion of single and ensemble-based classifiers. The evaluation results showed that MultiBoost with MLP outperformed the others by achieving 98.7% accuracy and 98.6% precision, recall, and F-score. The study implies that the proposed model could be useful in identifying the academic performance of secondary-level students at an early stage to improve learning outcomes.

Keywords: educational data mining; supervised learning; secondary education; academic performance

1. Introduction

Educational data mining (EDM) is a growing area of research that is used to explore educational data for different academic purposes. The main application of EDM is the prediction of students' academic performance [1,2]. In data mining, the analysis and interpretation of student academic performance are regarded as suitable analysis, evaluation, and assessment tools [3]. In the present era of a knowledge economy, students are the key element for the socio-economic growth of any country, so keeping their performance on track is essential. Data mining (DM) methods are applied to learn hidden knowledge and patterns that assist administrators and academicians in decision making regarding the delivery of instruction. DM techniques have applications in numerous areas, including retail business, the health sector, marketing, banking, bioinformatics, and counterterrorism, and many others also use them to enhance productivity and efficiency.

province-separator

I've created a Python application using Tkinter to separate data from one Excel file into multiple Excel files based on province names.
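The core grouping step can be sketched with the standard library alone; the repository itself reads and writes Excel files and wraps this in a Tkinter GUI, and the column name `province` here is an assumption.

```python
from collections import defaultdict

def split_by_province(rows, key="province"):
    """Group dict-rows by province; each group would be written to its own file."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row)
    return dict(groups)

rows = [{"province": "Tehran", "value": "1"},
        {"province": "Fars",   "value": "2"},
        {"province": "Tehran", "value": "3"}]
groups = split_by_province(rows)
```

Each group would then be written out per province, e.g. with `csv.DictWriter` or pandas' `DataFrame.to_excel`.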

random_forest

Random forests or random decision forests are an ensemble learning method for classification, regression, and other tasks that operates by constructing a multitude of decision trees at training time. For classification tasks, the output of the random forest is the class selected by most trees; for regression tasks, the mean or average prediction of the individual trees is returned. Random decision forests correct for decision trees' habit of overfitting to their training set. Random forests generally outperform decision trees, but their accuracy is lower than that of gradient-boosted trees; however, data characteristics can affect their performance.

The first algorithm for random decision forests was created in 1995 by Tin Kam Ho using the random subspace method, which, in Ho's formulation, is a way to implement the "stochastic discrimination" approach to classification proposed by Eugene Kleinberg. An extension of the algorithm was developed by Leo Breiman and Adele Cutler, who registered "Random Forests" as a trademark in 2006 (as of 2019, owned by Minitab, Inc.). The extension combines Breiman's "bagging" idea with random selection of features, introduced first by Ho and later independently by Amit and Geman, in order to construct a collection of decision trees with controlled variance.

Random forests are frequently used as "black-box" models in businesses, as they generate reasonable predictions across a wide range of data while requiring little configuration.
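The bagging-plus-majority-vote idea can be sketched in pure Python with the simplest possible base learner, a one-dimensional decision stump; real random forests grow full trees and also subsample features at each split, which this illustration omits.

```python
import random

def stump_fit(data):
    """Fit a 1-D threshold stump on (x, label) pairs by minimising training error."""
    best = None
    for t in sorted({x for x, _ in data}):
        for left, right in ((0, 1), (1, 0)):      # try both orientations
            preds = [(left if x <= t else right) for x, _ in data]
            err = sum(p != y for p, (_, y) in zip(preds, data))
            if best is None or err < best[0]:
                best = (err, t, left, right)
    _, t, left, right = best
    return lambda x: left if x <= t else right

def forest_fit(data, n_trees=15, seed=0):
    rng = random.Random(seed)
    trees = []
    for _ in range(n_trees):
        boot = [rng.choice(data) for _ in data]    # bootstrap sample (with replacement)
        trees.append(stump_fit(boot))
    return trees

def forest_predict(trees, x):
    votes = [t(x) for t in trees]
    return max(set(votes), key=votes.count)        # majority vote

data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
trees = forest_fit(data)
```

Individual stumps trained on different bootstrap samples disagree near the class boundary, but the majority vote averages that variance out.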

strgnn

Code for Structural Temporal Graph Neural Networks for anomaly detection in dynamic graphs.

t-test_for_1and0_values

Target: to check whether the difference between the averages (means) of two groups (populations) is significant, using sample data.

Example 1: the average man is expected to be 10 cm taller than the average woman (d = 10).
Example 2: the average weight of an apple grown in field 1 is expected to equal the average weight of an apple grown in field 2 (d = 0).
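For reference, the pooled-variance two-sample t statistic for testing a hypothesised mean difference d can be computed in pure Python; the height samples below are made-up illustration data.

```python
import math

def t_statistic(a, b, d=0.0):
    """Pooled two-sample t statistic for H0: mean(a) - mean(b) = d."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # unbiased sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb - d) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Example 1: is the mean height gap consistent with d = 10 cm?
men   = [178.0, 182.0, 176.0, 181.0]
women = [168.0, 171.0, 169.0, 172.0]
t = t_statistic(men, women, d=10.0)
```

The resulting t is then compared against the Student-t distribution with na + nb − 2 degrees of freedom; a small |t| (as here) means the observed gap is consistent with the hypothesised d.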

titanic---machine-learning-from-disaster

The sinking of the Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the RMS Titanic, widely considered "unsinkable", sank after colliding with an iceberg. Unfortunately, there weren't enough lifeboats for everyone onboard, resulting in the death of 1,502 out of 2,224 passengers and crew.

tumor-classification

A brain tumor is a severe cancer and a life-threatening disease, so early detection is crucial to treatment. Recent progress in the field of deep learning has contributed enormously to medical diagnosis in the health industry. Convolutional neural networks (CNNs) have been used intensively as a deep learning approach to detect brain tumors in MRI images. Due to the limited dataset, deep learning algorithms and CNNs should be improved to be more efficient, and one of the best-known techniques for improving model performance is data augmentation. This paper presents a detailed review of various CNN architectures and highlights the characteristics of particular models such as ResNet, AlexNet, and VGG. After that, we provide an efficient method for detecting brain tumors in magnetic resonance imaging (MRI) datasets based on CNNs and data augmentation. The evaluation metrics of the proposed solution show that it contributes beyond previous studies in terms of both deep architectural design and high detection success.

using-an-avl-tree-to-handle-scheduling-airplane-landings

This code implements an efficient data structure, namely an AVL tree, to handle scheduling airplane landing times without conflicts. The project was developed as part of a Data Structure course. The data structure is self-balancing and maintains low time complexity.
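A minimal AVL insert sketch, not the repository's code: each node stores its height, and after every insertion the tree rebalances with single or double rotations so lookups and insertions stay O(log n). The landing times below are made-up minutes-since-midnight values.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def height(n):
    return n.height if n else 0

def balance(n):
    return height(n.left) - height(n.right)

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)                    # y is now the child, update it first
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    update(root)
    b = balance(root)
    if b > 1 and key < root.left.key:       # left-left case
        return rotate_right(root)
    if b > 1:                               # left-right case
        root.left = rotate_left(root.left)
        return rotate_right(root)
    if b < -1 and key >= root.right.key:    # right-right case
        return rotate_left(root)
    if b < -1:                              # right-left case
        root.right = rotate_right(root.right)
        return rotate_left(root)
    return root

root = None
for t in [900, 930, 915, 800, 1000, 945]:   # landing times in minutes since midnight
    root = insert(root, t)
```

An in-order walk of the result yields the landing times in order; checking a requested slot against its in-order neighbours detects conflicts in O(log n).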
