
SparkStream Analytics

Overview

This project showcases PySpark's capabilities for streaming data analysis on a set of CSV files. It guides you through the steps of creating a streaming DataFrame, performing data transformations, and writing the results to output files.

Prerequisites

Before you start, ensure you have the following:

  • Python: Make sure Python is installed on your system, preferably Python 3.x.

  • Apache Spark and PySpark: Install Apache Spark and PySpark to work efficiently with streaming data. Follow the installation instructions on the Apache Spark website.

  • CSV Files: Prepare a directory containing the CSV files for streaming and analysis. In this project, we use financial data (e.g., stock prices) in CSV format.

Project Structure

Here's a brief overview of the project structure:

  • README.md: The documentation you are currently reading.

  • Streaming_Practical_Session: The Python script that performs PySpark streaming data analysis.

Getting Started

  1. Define the Schema: Create a schema for the streaming data by specifying column names and data types.

  2. Create the Streaming DataFrame: Use PySpark to create a streaming DataFrame by reading data from the specified directory. Ensure the data source format is set to "csv" and provide the defined schema.

  3. Check Streaming Status: Verify that the streaming DataFrame is correctly configured for streaming data. You should see a True output for df.isStreaming.

  4. Create Stream Writer: Set up a stream writer to handle the data, using the in-memory sink (format "memory") so intermediate results can be queried with SQL.

  5. Start the Write Stream: Begin the stream writing process and ensure it's working correctly. You can check by running queries to display the data.

  6. Data Preprocessing: Perform data preprocessing tasks, such as removing rows with all null values and creating a new column to calculate the difference between "High" and "Low" prices.

  7. Create a New Stream Writer: Set up a new stream writer for the modified data, and start the write stream.

  8. Write to Files: Instead of writing to memory, write the generated data into output files, specifying the output path and checkpoint location.

  9. Stop the Query: Stop the streaming query once the data is written to files.

  10. Read Generated Files: Read the data from the generated output files using a predefined schema. Sort the DataFrame based on the "ID" column.

Results

The project results include:

  • Streaming data analysis on CSV files.

  • Data preprocessing and transformation.

  • Writing the modified data to output files.

Access the results in the generated output files located in the "outputstream" directory.

Contributing

If you find any issues, have suggestions for improvements, or would like to contribute, please feel free to open an issue or create a pull request. We welcome collaboration and contributions from the community.

Contributors

  • heba106
