DataPull

DataPull is a self-service, distributed ETL tool for joining and transforming data across heterogeneous datastores. It gives users an easy and consistent way to move data from one datastore to another. Supported datastores include, but are not limited to, SQL Server, MySQL, PostgreSQL, Cassandra, MongoDB, and Kafka.

Features

  1. JSON configuration-driven data movement - no Java/Scala knowledge needed (see the sketch below)
  2. Join and transform data across heterogeneous datastores (including NoSQL datastores) using ANSI SQL
  3. Deploys on Amazon EMR and AWS Fargate, but can run on any Spark cluster
  4. Picks up datastore credentials stored in HashiCorp Vault or AWS Secrets Manager
  5. Execution logs and migration history can be sent to Amazon CloudWatch and/or S3
  6. Use the built-in cron scheduler, or call the REST API from external schedulers

... and many more features documented here
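To give a feel for feature 1, here is a rough sketch of an input file that reads a CSV, transforms it with ANSI SQL, and writes JSON. The field names and paths below are illustrative only, not the exact DataPull schema; see the sample files under core/src/main/resources/Samples (e.g. Input_Sample_filesystem-to-filesystem.json) for the real format.

    {
      "useremailaddress": "user@example.com",
      "migrations": [
        {
          "sources": [
            { "platform": "filesystem", "path": "SampleData/HelloWorld.csv", "fileformat": "csv", "alias": "helloworld" }
          ],
          "sql": { "query": "SELECT * FROM helloworld" },
          "destination": { "platform": "filesystem", "path": "SampleData_Json", "fileformat": "json" }
        }
      ]
    }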

Run DataPull locally

Note: DataPull consists of two services: an API written in Java Spring Boot, and a Spark app written in Scala. Although Scala apps can run on JDK 11, the official docs recommend using Java 8 to compile Scala code. The effort to upgrade to OpenJDK 11+ is tracked here

Build and execute within a Dockerised Spark environment

Pre-requisite: Docker Desktop

  • Clone this repo locally and check out the master branch
    git clone git@github.com:homeaway/datapull.git
    
  • Build the Scala JAR from within the core folder
    cd datapull/core
    make build
    
  • Execute the sample JSON input file Input_Sample_filesystem-to-filesystem.json, which moves data from the CSV file HelloWorld.csv to a folder of JSON files named SampleData_Json.
    docker run -v $(pwd):/core -w /core -it --rm gettyimages/spark:2.2.1-hadoop-2.8 \
      spark-submit --deploy-mode client --class core.DataPull \
      target/DataMigrationFramework-1.0-SNAPSHOT-jar-with-dependencies.jar \
      src/main/resources/Samples/Input_Sample_filesystem-to-filesystem.json local
    
  • Open the relative path target/classes/SampleData_Json to find the result of the DataPull, i.e. the data from target/classes/SampleData/HelloWorld.csv transformed into JSON.
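If the run succeeds, the output folder contains Spark part files. A quick way to inspect them from the core folder (exact file names vary by run):

    # List the JSON output written by the sample DataPull
    ls target/classes/SampleData_Json
    # Print the transformed records (Spark writes them as part-* files)
    head target/classes/SampleData_Json/part-*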

Build and debug within an IDE (IntelliJ)

Pre-requisite: IntelliJ with the Scala plugin configured. Check out this Help page if the plugin is not installed.

  • Clone this repo locally and check out the master branch
  • Open the folder core in IntelliJ IDE.
  • When prompted, add this project as a maven project.
  • By default, this source code is designed to execute the sample JSON input file Input_Sample_filesystem-to-filesystem.json, which moves data from the CSV file HelloWorld.csv to a folder of JSON files named SampleData_Json.
  • Go to File > Project Structure..., and choose 1.8 (Java version) as the Project SDK
  • Go to Run > Edit Configurations..., and do the following
    • Create an Application configuration (use the + sign on the top left corner of the modal window)
    • Set the Name to Debug
    • Set the Main Class to core.DataPull
    • Use classpath of module Core.DataPull
    • Set JRE to 1.8
    • Click Apply and then OK
  • Click Run > Debug 'Debug' to start the debug execution
  • Open the relative path target/classes/SampleData_Json to find the result of the DataPull, i.e. the data from target/classes/SampleData/HelloWorld.csv transformed into JSON.

Deploy DataPull to Amazon AWS

Deploying DataPull to Amazon AWS involves:

  • installing the DataPull API and Spark JAR in AWS Fargate, using this runbook
  • running DataPulls in AWS EMR, using this runbook

Contribute to this project

Bugs/Feature Requests

Please create an issue in this git repo, using the bug report or feature request templates.

Documentation

DataPull documentation is available at https://homeaway.github.io/datapull/. To update this documentation, please follow these steps:

  • Create a Feature Request issue
    • Please fill in the title and the body of the issue. Our suggested title is "Documentation for <what this documentation is for>"
  • Fork the DataPull repo
  • Install MkDocs and Material for MkDocs (see the command sketch after this list)
  • Clone your forked repo locally, and run mkdocs serve in a terminal from the docs folder of the repo
  • Open http://127.0.0.1:8000 to see a preview of the documentation site. You can edit the documentation by following the getting-started guide at https://www.mkdocs.org/#getting-started
  • Once you're done updating the documentation, please commit and push your local master branch to your fork. Also, run mkdocs gh-deploy at the terminal to build and push your gh-pages branch.
  • Create two PRs (one for the master branch, one for the gh-pages branch) and we'll review and approve them.
  • Thanks again for helping make DataPull better!
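For reference, here is a sketch of the commands involved, assuming a pip-based MkDocs install and a fork cloned as datapull:

    # Install MkDocs and the Material theme (assumes Python and pip are available)
    pip install mkdocs mkdocs-material

    # Preview the documentation locally from the docs folder of your fork
    cd datapull/docs
    mkdocs serve        # live preview at http://127.0.0.1:8000

    # Build the site and push it to the gh-pages branch of your fork
    mkdocs gh-deploy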
