This project showcases PySpark's capabilities for streaming data analysis on a set of CSV files. It guides you through the steps of creating a streaming DataFrame, performing data transformations, and writing the results to output files.
Before you start, ensure you have the following:

- Python: Make sure Python is installed on your system, preferably Python 3.x.
- Apache Spark and PySpark: Install Apache Spark and PySpark to work with streaming data efficiently. Follow the installation instructions on the Apache Spark website.
- CSV Files: Prepare a directory containing the CSV files to stream and analyze. This project uses financial data (e.g., stock prices) in CSV format.
Here's a brief overview of the project structure:

- README.md: The documentation you are currently reading.
- Streaming_Practical_Session: The Python script that performs PySpark streaming data analysis.

The script walks through the following steps:

- Define the Schema: Create a schema for the streaming data by specifying column names and data types.
- Create the Streaming DataFrame: Use PySpark to create a streaming DataFrame by reading data from the specified directory. Set the data source format to "csv" and supply the defined schema.
- Check Streaming Status: Verify that the DataFrame is correctly configured for streaming; `df.isStreaming` should return `True`.
- Create a Stream Writer: Set up a stream writer that stores intermediate results in memory.
- Start the Write Stream: Begin the stream writing process and confirm it is working by running queries against the in-memory table.
- Data Preprocessing: Remove rows in which every value is null, and add a new column holding the difference between the "High" and "Low" prices.
- Create a New Stream Writer: Set up a second stream writer for the transformed data and start its write stream.
- Write to Files: Instead of writing to memory, write the transformed data to output files, specifying the output path and checkpoint location.
- Stop the Query: Stop the streaming query once the data has been written to files.
- Read the Generated Files: Read the data back from the output files using the predefined schema, and sort the resulting DataFrame by the "ID" column.
The project results include:

- Streaming data analysis on CSV files.
- Data preprocessing and transformation.
- The modified data written to output files.
Access the results in the generated output files located in the `outputstream` directory.
If you find any issues, have suggestions for improvements, or would like to contribute, please feel free to open an issue or create a pull request. We welcome collaboration and contributions from the community.