quantumudit / analyzing-pokemons

This project focuses on scraping Pokémon data from a complete Pokédex, performing the necessary transformations on the scraped data, and then analyzing & visualizing it using Jupyter Notebook and Power BI.

License: Other

Python 2.29% Jupyter Notebook 97.71%
python data-visualization power-bi jupyter-notebook data-analytics webscraping

analyzing-pokemons's Introduction

Project Logo


Catching & Analyzing Pokémons with Python and Power BI

built-with-love powered-by-coffee cc-nc-sa

Overview • Prerequisites • Architecture • Demo • Support • License

Overview

This project is centered on scraping Pokémon data from the Pokémon Database, followed by conducting exploratory data analysis (EDA) to uncover insights, which are then visualized using Power BI.

Below is a snapshot of the target website:

Website Snapshot

The workflow involves using Python for web scraping to extract relevant information from the target site, performing necessary data transformations, and then visualizing and reporting the findings in Power BI.

The Power BI report gives users a valuable tool for analyzing the statistics of any Pokémon, favorites included, and comparing them against others.

Here is a snapshot of the Power BI report:

Power BI Report Snapshot

The project repository exhibits the following structure:

Analyzing-Pokemons/
├── 📁.github
├── 📁conf
├── 📁data/
│   ├── 📁external
│   ├── 📁processed
├── 📁notebooks
├── 📁src/
│   ├── 📁components
│   ├── 📁pipelines
│   ├── 📁utils
│   ├── 🐍constants.py
│   ├── 🐍exception.py
│   └── 🐍logger.py
├── 📁logs
├── 📁reports
├── 📁resources
├── 🐍main.py
├── 🐍template.py
├── 🔒poetry.lock
├── 📇pyproject.toml
├── 🗒️requirements.txt
├── 📜.gitignore
├── 🔑LICENSE
└── 📝README.md
💡 Repository Structure Details

To help you navigate through the project, here’s a concise guide to the repository’s structure, detailing what each directory contains and its purpose within the project:

  • 📁.github - Contains GitHub-related configuration files like workflows for CI/CD.
  • 📁conf - Configuration files and schema for the project.
  • 📁data/
    • 📁external - Data extracted from external data source(s).
    • 📁processed - Data that has been cleaned and transformed for analysis.
  • 📁notebooks - Jupyter notebooks for exploratory data analysis and model experimentation.
  • 📁src/
    • 📁components - Modular components used across the project.
    • 📁pipelines - Data processing and machine learning pipelines.
    • 📁utils - Utility scripts for common tasks throughout the project.
    • 🐍constants.py - Central file for constants used in the project.
    • 🐍exception.py - Custom exception classes for error handling.
    • 🐍logger.py - Logging configuration and setup.
  • 📁logs - Contains auto-generated logs for event and error tracking, not included in Git.
  • 📁reports - Generated analysis reports and insights.
  • 📁resources - Additional resources like images or documents used in the project.
  • 🐍main.py - Script that orchestrates the project's workflow by sequentially executing the pipeline scripts (see the sketch after this list).
  • 🐍template.py - Template script for standardizing code structure.
  • 🔒poetry.lock - Lock file for Poetry to ensure reproducible builds.
  • 📇pyproject.toml - Poetry configuration file for package management.
  • 🗒️requirements.txt - List of Python package requirements.
  • 📜.gitignore - Specifies intentionally untracked files to ignore.
  • 🔑LICENSE - The license file for the project.
  • 📝README.md - The introductory documentation for the project.
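
For illustration, here is a minimal sketch of how 🐍main.py might sequentially execute the pipeline scripts. The module paths, class names, and run() entry point are assumptions for illustration, not the repository's actual API:

# Illustrative sketch of sequential pipeline orchestration.
# The pipeline modules and class names below are assumptions,
# not taken from the actual repository.
from src.logger import logger                                              # assumed project logger
from src.pipelines.scraping_pipeline import ScrapingPipeline               # hypothetical
from src.pipelines.transformation_pipeline import TransformationPipeline   # hypothetical

def main() -> None:
    """Run each pipeline stage in order, logging progress."""
    stages = [
        ("web scraping", ScrapingPipeline()),
        ("data transformation", TransformationPipeline()),
    ]
    for name, pipeline in stages:
        logger.info("Starting stage: %s", name)
        pipeline.run()  # assumed entry point of each pipeline
        logger.info("Finished stage: %s", name)

if __name__ == "__main__":
    main()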

Prerequisites

Tech Stack Prerequisites

Python Numpy Pandas Matplotlib PowerBI

To engage effectively with this project, you should have a solid understanding of the skills listed below:

  • Core comprehension of Python, Web Scraping, and Modular programming
  • Acquaintance with data modelling, DAX and Power BI
  • Acquaintance with the Python libraries specified in the 🗒️requirements.txt document

These competencies will facilitate a seamless and productive journey throughout the project.

Development Environment Prerequisites

Anaconda Poetry VS_code Jupyter_Notebook PowerBI Notepad_plus_plus Obsidian Figma Clickup

Application selection and setup may vary based on individual preferences and system configurations.

The development tools I've employed for this project are:

  • Anaconda / Poetry: Utilized for Python distribution and package management
  • VS Code: Employed for writing and editing code
  • Jupyter Notebook: Used for data analysis and experimentation
  • Power BI Desktop: Used for data modeling and visualization
  • Notepad++: Served as an auxiliary code editor
  • Obsidian: Utilized for documenting project notes
  • Figma: Used for crafting application UI/UX designs
  • ClickUp: Employed for overseeing project tasks

Automation Integration Necessities

GitHubActions

Integrating process automation is entirely optional, as is the choice of the automation tool.

In this project, GitHub Actions has been selected to automate the web scraping and data transformation process as needed.

Should there be a need to adjust data-related settings, simply update the YAML configurations, and the entire development workflow can be executed directly from the repository.

Note: The website may undergo changes in the future, necessitating adjustments to the web scraping script. As a result, the scripts are not completely future-proof and may need to be updated if the website alters its content or presentation.

Architecture

The project's architectural design is transparent and can be readily understood with the help of the diagram below:

Process Architecture

The project's architectural framework encompasses the following key steps:

Web Scraping

This step involves extracting relevant data from the target website using Python's web scraping modules. These modules help navigate the website's structure to collect the required information efficiently, ensuring that the data is accurately captured and ready for subsequent analysis.
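
As a rough illustration, a minimal scraper might look like the sketch below; the URL, table id, and parsing logic are assumptions about the target page rather than the project's actual scraping components:

# Minimal web-scraping sketch using requests and BeautifulSoup.
# The URL and HTML selectors are assumptions about the target page,
# not the project's actual scraping components.
import requests
from bs4 import BeautifulSoup

POKEDEX_URL = "https://pokemondb.net/pokedex/all"  # assumed target page

def scrape_pokedex() -> list[dict]:
    """Fetch the full Pokédex table and return one dict per row."""
    response = requests.get(POKEDEX_URL, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    table = soup.find("table", id="pokedex")  # assumed table id
    headers = [th.get_text(strip=True) for th in table.find("thead").find_all("th")]

    rows = []
    for tr in table.find("tbody").find_all("tr"):
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        rows.append(dict(zip(headers, cells)))
    return rows

if __name__ == "__main__":
    pokemon = scrape_pokedex()
    print(f"Scraped {len(pokemon)} rows; first entry: {pokemon[0]}")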

Data Transformation

Once the data is scraped, it undergoes a series of transformations to clean and prepare it for analysis. This process involves handling missing values, correcting data types, filtering out irrelevant data, and restructuring the dataset to align with analytical goals. By doing so, the data becomes suitable for accurate analysis and visualization.
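
Here is a short pandas sketch of the kinds of transformations described above; the column names and cleaning rules are illustrative assumptions, not the project's actual schema:

# Pandas sketch of the cleaning steps described above.
# Column names and rules are illustrative assumptions.
import pandas as pd

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()

    # Correct data types: stats scraped as text become numeric.
    stat_cols = ["HP", "Attack", "Defense", "Speed"]  # assumed columns
    df[stat_cols] = df[stat_cols].apply(pd.to_numeric, errors="coerce")

    # Handle missing values: drop rows lacking a name or any stat.
    df = df.dropna(subset=["Name", *stat_cols])

    # Filter out irrelevant data, e.g. duplicate entries per name.
    df = df.drop_duplicates(subset="Name")

    # Restructure: split a combined "Type" column into two columns.
    types = df["Type"].str.split(" ", n=1, expand=True).reindex(columns=[0, 1])
    df[["Type1", "Type2"]] = types

    return df.reset_index(drop=True)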

Process Automation

The web scraping and data transformation steps are automated using GitHub Actions. This automation allows the process to be executed seamlessly and consistently without manual intervention. The setup ensures that data extraction and preparation can be performed on-demand, enhancing efficiency and scalability.

Data Analysis & Visualization

During this phase, the transformed dataset undergoes analysis to derive meaningful insights and address specific user queries.

A range of analytical techniques is applied to interpret the data, and the findings are showcased through interactive visualizations in Power BI.

The Power BI dashboard is equipped with slicers, cross-filtering, and other advanced features, allowing users to interact with the data, explore specific aspects, and uncover additional insights.

This dashboard offers a user-friendly and engaging interface for exploring data insights, enabling informed decision-making based on the analysis.
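
As a taste of the analysis phase, a notebook-style snippet like the one below could surface a simple insight; the file path and column names are assumptions for demonstration:

# Notebook-style EDA snippet; the file path and column names
# are assumptions for demonstration, not the project's schema.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data/processed/pokemon.csv")  # assumed processed dataset

# Derive a simple insight: average Attack per primary type.
avg_attack = (
    df.groupby("Type1")["Attack"]  # assumed columns
      .mean()
      .sort_values(ascending=False)
)

avg_attack.plot(kind="bar", title="Average Attack by Primary Type")
plt.xlabel("Primary Type")
plt.ylabel("Average Attack")
plt.tight_layout()
plt.show()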

Demo

The following illustration demonstrates how the interactive Power BI report can be used to explore insights from the data:

Power BI Report

Access the Power BI report by clicking here: Power BI Report

Support

Should you wish to inquire, offer feedback, or propose ideas, don’t hesitate to contact me via the channels listed below:

Linkedin Badge Twitter Badge Gmail Badge

Discover and engage with my content on these platforms:

Linktree Badge Youtube Badge GitHub Badge Medium Badge Substack Badge

To express your support for my work, consider buying me a coffee or donating through PayPal:

Buy Me a Coffee Paypal

License

by-nc-sa

This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms.


topmate-udit

