
Final Project - Cluster on Docker

PRO env. [Version 1.1.0](#control-versioning)

Abstract

This final project presents a viable data architecture to address the fraud detection use case in the insurance sector, and shows how to analyse the features used to build a fraud prediction model. We design the architecture, analyse the data pipeline in two modes, batch and streaming, and finally present the results. As a complement, we compare different data treatment processes, study different approaches to the start-up phase, and identify possible risks. The aim of the work is to combine data architectures, opting for a hybrid between the Lambda and Kappa architectures, together with a microservices approach based on Docker, machine learning monitoring through MLOps methods, and a GitHub workflow.


Architecture Design

  • Lambda Architecture
  • Kappa Architecture

Implementation and configuration

  • Dockerfile // Docker build

Microservices and ports

  • Spark -> master 7077, 4044 - workers 8081, 8082, 8083
  • Kafka standalone -> 9091
  • Kafka by Confluent stack -> 19092 / 29092 / 9092
  • MongoDB -> 27017, 27018, 27019
  • PostgreSQL -> 5432, 15432, 25432
  • NodeJS
  • Grafana -> 3000
  • Prometheus -> 9090
  • RStudio (pending due to a server/Livy version conflict with Spark 3.1.1)
  • JupyterLab Notebook -> 8888
  • Zeppelin Notebook -> 7081
  • MLflow -> 5000
  • Superset (pending due to a port conflict) -> 8088
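
As an illustration of how these ports are used, a notebook can attach to the standalone Spark master over port 7077. The following is a minimal PySpark sketch, assuming the master container is reachable under the hostname spark-master (an assumption; the actual service name depends on the compose file):

# Connect a notebook session to the standalone master on port 7077.
# The hostname "spark-master" is an assumed compose service name.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("port-check")
    .master("spark://spark-master:7077")
    .getOrCreate()
)
print(spark.sparkContext.master)   # should print spark://spark-master:7077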

Other databases (next release)

  • HBase
  • Hive
  • Cassandra
  • Druid

Jupyter notebooks, code, Proof of Concept

  • Examples on Spark
  • Examples of Kafka to Spark
  • Kafka consumer and producer on PySpark
  • Porto Seguro's claim prediction on Python
  • Porto Seguro's claim prediction on PySpark
  • Porto Seguro's claim prediction on Databricks
  • Machine learning with MLflow (see the sketch after this list)
  • Streaming processing (pending)
  • ML code on Python
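
The machine learning notebooks log runs against the MLflow tracking server exposed on port 5000 (see the ports list above). A minimal sketch, assuming the server is reachable at http://<hostname_virtual_machine>:5000 and using an illustrative experiment name:

# MLflow tracking sketch; the tracking URI and experiment name are assumptions.
import mlflow

mlflow.set_tracking_uri("http://<hostname_virtual_machine>:5000")
mlflow.set_experiment("porto-seguro-example")   # illustrative experiment name

with mlflow.start_run():
    # The real notebooks would log the model's hyperparameters and evaluation metrics.
    mlflow.log_param("model", "example-classifier")
    mlflow.log_metric("auc", 0.5)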

Guide available here

SETUP - Git & Docker Compose (latest version)

Tested on a Linux VM

  1. Clone the master repository
$ git clone https://<repository-url>
  2. Check that Docker Compose is installed, or install it
$ apt install docker-compose
  3. Start the main Kafka and Spark cluster (simulation)
$ docker-compose -f docker-compose-cluster-spark-kafka.yml up -d
  4. Check the Docker containers
$ docker-compose ps -a
or
$ docker ps -a
  5. Check the Docker logs
$ docker-compose -f <docker-compose-file>.yml logs


Start Kafka by Confluent & Spark - MongoDB & other databases

  1. Start Docker Compose
# Start Confluent services
$ docker-compose -f docker-compose-confluent-kafka.yml up -d
# Start Spark services
$ docker-compose -f docker-compose-cluster-spark.yml up -d
# Start MongoDB replicas
$ docker-compose -f docker-compose-mongodb.yml up -d
  2. Start Control Center

<hostname_virtual_machine>:9021

  • broker:29092
  • <hostname_virtual_machine>:9092
  3. Start Jupyter Notebook

<hostname_virtual_machine>:8888
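
Once the Confluent, Spark and MongoDB stacks are up, connectivity can be verified from a notebook with a short Python check. This is a sketch under the assumption that the kafka-python and pymongo packages are available in the notebook image:

# Connectivity check sketch; hostnames and ports follow the compose files above.
from kafka import KafkaConsumer        # kafka-python package
from pymongo import MongoClient

# Kafka: list the topics visible on the external listener (port 9092)
consumer = KafkaConsumer(bootstrap_servers="<hostname_virtual_machine>:9092")
print("Kafka topics:", consumer.topics())

# MongoDB: ping the first replica (port 27017)
mongo = MongoClient("mongodb://<hostname_virtual_machine>:27017")
print("MongoDB ping:", mongo.admin.command("ping"))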


Start notebook containers - Jupyter & Zeppelin

  1. Launch the docker-compose command
$ docker-compose -f docker-compose-notebooks.yml up -d

Kafka simulation, part I

Inside the Jupyter environment, execute:

Start Spark service

  1. Start the Spark notebook on JupyterLab

<hostname_virtual_machine>:8888

  2. Check the Spark Master

<hostname_virtual_machine>:8080

  3. Check Spark Worker n

<hostname_virtual_machine>:8081

  4. Start the JupyterLab notebooks

Upload the notebooks from the /workspace/TFM/ directory

  5. Check the notebooks by topic
  • [Porto_Seguros]_PredictionModel_Python_052021_v1_0_0.ipynb

  • [Porto_Seguros]_PredictionModel_PySpark_062021_v1_0_0.ipynb

  • spark-kafka-consumer.ipynb

  • spark-kafka-producer.ipynb
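
The spark-kafka-consumer and spark-kafka-producer notebooks follow the standard Spark Structured Streaming pattern. Below is a minimal consumer sketch, assuming the broker is reachable at <hostname_virtual_machine>:9092, that the spark-sql-kafka connector is on the classpath, and using an illustrative topic name:

# Structured Streaming consumer sketch; broker address and topic name are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-consumer-sketch").getOrCreate()

# Read the topic as a streaming DataFrame
df = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "<hostname_virtual_machine>:9092")
    .option("subscribe", "test-topic")   # illustrative topic name
    .load()
)

# Write the message values to the console as they arrive
query = (
    df.selectExpr("CAST(value AS STRING)")
    .writeStream
    .format("console")
    .start()
)
query.awaitTermination()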


Control Versioning

  • 1.1.0 - 01.07.2021 (added Zeppelin notebook, changed Superset port)
