PRO env. [Version 1.1.0](#control-versioning)
This final project presents a viable data architecture for the fraud detection use case in the insurance sector, and shows how to analyse the features used to build a fraud prediction model. We design the architecture, analyse the data pipeline in two temporal modes, batch and streaming, and finally present the results. As a complement, we compare different data treatment processes, study different ways to approach the start-up phase, and identify possible risks. The aim of the work is to combine data architectures, opting for a hybrid between the Lambda and Kappa architectures, together with Docker-based microservices and machine learning monitoring through MLOps practices and a GitHub workflow.
- Lambda Architecture
- Kappa Architecture
- Dockerfile // Docker build
- Spark -> master 7077, 4044 - workers 8081, 8082, 8083
- Kafka standalone -> 9091
- Kafka by Confluent stack -> 19092 / 29092 / 9092
- MongoDB -> 27017, 27018, 27019
- PostgreSQL -> 5432, 15432, 25432
- NodeJS
- Grafana -> 3000
- Prometheus -> 9090
- RStudio (pending due to a server/Livy version conflict on Spark 3.1.1)
- JupyterLab Notebook -> 8888
- Zeppelin Notebook -> 7081
- MLflow -> 5000
- Superset (pending due to a port conflict) -> 8088
- HBase (next release)
- Hive (next release)
- Cassandra (next release)
- Druid (next release)
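Once the containers are up, the port assignments above can be verified from the host. A minimal sketch (the `localhost` hostname and the subset of ports checked are assumptions taken from the list above):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports taken from the service list above (assumed to run on localhost)
SERVICES = {
    "Spark master": 7077,
    "Kafka (Confluent)": 9092,
    "MongoDB": 27017,
    "PostgreSQL": 5432,
    "Grafana": 3000,
    "Prometheus": 9090,
    "JupyterLab": 8888,
    "MLflow": 5000,
}

for name, port in SERVICES.items():
    state = "open" if is_port_open("localhost", port) else "closed"
    print(f"{name:20s} :{port} -> {state}")
```

A port reported as closed usually means the corresponding container is not running or is mapped to a different host port.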
- Examples on Spark
- Examples of Kafka to Spark
- Kafka consumer and producer on PySpark
- Porto Seguro's claim prediction in Python
- Porto Seguro's claim prediction in PySpark
- Porto Seguro's claim prediction on Databricks
- Machine learning with MLflow
- Streaming processing (pending)
- ML code in Python
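As an illustration of the Python prediction models listed above, here is a minimal sketch of a binary claim classifier on synthetic data. The synthetic features and the use of scikit-learn's `LogisticRegression` are assumptions for the sketch; the actual notebooks train on the Porto Seguro dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-in for the Porto Seguro features (the real dataset
# has anonymised ps_* columns and a binary `target`)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Claim probability per policy, scored with AUC as in the Kaggle task
proba = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, proba):.3f}")
```

The same fit/predict structure carries over to the PySpark version, with `pyspark.ml` estimators replacing the scikit-learn ones.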
Tested on a Linux VM
- Clone the master repository
$ git clone https://<repository-url>
- Check that Docker Compose is installed, or install it
$ apt install docker-compose
- Start the main Kafka and Spark cluster (simulation)
$ docker-compose -f docker-compose-cluster-spark-kafka.yml up -d
- Check Docker containers
$ docker-compose ps -a
or
$ docker ps -a
- Check Docker logs
$ docker-compose -f <docker-compose-file>.yml logs
- Start the individual Docker Compose stacks
# Start Confluent services
$ docker-compose -f docker-compose-confluent-kafka.yml up -d
# Start Spark services
$ docker-compose -f docker-compose-cluster-spark.yml up -d
# Start MongoDB replicas
$ docker-compose -f docker-compose-mongodb.yml up -d
- Start Control Center
http://<hostname_virtual_machine>:9021
- internal broker listener: broker:29092
- external broker listener: :9092
- Start Jupyter Notebook
http://<hostname_virtual_machine>:8888
- Launch docker-compose command
$ docker-compose -f docker-compose-notebooks.yml up -d
Inside the Jupyter environment:
- Start a Spark notebook on JupyterLab
http://<hostname_virtual_machine>:8888
- Check the Spark master UI
http://<hostname_virtual_machine>:8080
- Check the Spark worker n UI
http://<hostname_virtual_machine>:8081 (8082 and 8083 for the other workers)
- Start the JupyterLab notebooks
Upload the notebooks from the /workspace/TFM/ directory
- Check notebooks by theme
- [Porto_Seguros]_PredictionModel_Python_052021_v1_0_0.ipynb
- [Porto_Seguros]_PredictionModel_PySpark_062021_v1_0_0.ipynb
- spark-kafka-consumer.ipynb
- spark-kafka-producer.ipynb
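The spark-kafka-consumer/producer notebooks exchange events through Kafka. As a minimal pure-Python sketch of the serialization step such a producer applies to each event (the field names, topic, and the kafka-python client shown in the comment are assumptions, not taken from the notebooks):

```python
import json
from datetime import datetime, timezone

def serialize_claim(event: dict) -> bytes:
    """Encode a claim event as UTF-8 JSON, the kind of payload a Kafka
    producer sends and a Spark streaming consumer parses."""
    return json.dumps(event, sort_keys=True).encode("utf-8")

# Hypothetical claim event (field names are illustrative only)
event = {
    "policy_id": "POL-0001",
    "claim_amount": 1250.0,
    "timestamp": datetime(2021, 6, 1, tzinfo=timezone.utc).isoformat(),
}
payload = serialize_claim(event)
print(payload)

# With a running broker it could be sent like this (kafka-python is an
# assumed client library; the broker address comes from the port list):
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers="localhost:9092")
# producer.send("claims", payload)
```

On the consumer side, the Spark notebooks would decode the same JSON payload from the Kafka `value` column before applying the prediction model.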
## Control Versioning
- 1.1.0 > 01.07.2021 (add Zeppelin notebook, change Superset port)