spring-boot-kafka-api

Java Spring microservice

In this sample application, you will create a basic Java web application using Spring. This provides a starting point for creating Java microservice applications running on Spring. It contains no default application code, but comes with standard best practices, including a health check and application metric monitoring.

Capabilities are provided through dependencies in the pom.xml file. The ports are set to the defaults of 8080 for HTTP and 8443 for HTTPS in the pom.xml file and are exposed to the CLI in the cli-config.yml file.

Steps

You can deploy this application to IBM Cloud or build it locally by cloning this repo first. Once your app is live, you can access the /health endpoint and then build out your cloud-native application.

Deploying

After you have created a new git repo from this git template, remember to rename the project. Edit package.json and change the default name to the name you used to create the template.

Make sure you are logged into IBM Cloud using the IBM Cloud CLI and have access to your development cluster. If you are using OpenShift, make sure you have also logged in with the OpenShift CLI on the command line.

npm i -g @garage-catalyst/ibm-garage-cloud-cli

Use the IBM Garage for Cloud CLI to register the Git repo with Jenkins:

igc pipeline -n dev

Building Locally

To get started building this application locally, you can either run the application natively or use the IBM Cloud Developer Tools for containerization and easy deployment to IBM Cloud.

Native Application Development

To build and run an application:

  1. ./gradlew build
  2. ./gradlew bootRun

More Details

For more details on how to use this Starter Kit Template, please review the IBM Garage for Cloud Developer Tools Developer Guide.

Next Steps

License

This sample application is licensed under the Apache License, Version 2. Separate third-party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 and the Apache License, Version 2.

Apache License FAQ

Periodically update from the template

Finally, the template components can be periodically updated by running the following:

./update-template.sh

Kafka setup on a local laptop

Install Kafka

Start Zookeeper

./bin/zookeeper-server-start.sh config/zookeeper.properties

Start Kafka server/broker

./bin/kafka-server-start.sh config/server.properties

Useful Kafka commands

Describe Kafka topics

./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe

List Kafka topics

./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list

Delete Kafka topic

./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic demo

Simple Kafka example

This app includes a simple Kafka producer REST endpoint that writes a message to a Kafka topic and a simple consumer REST endpoint that reads a message from the topic. See the code in the Java package 'com.ibm.simplekafka'.

Create a Kafka topic called demo with 1 partition and a replication factor of 1:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic demo
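As a rough idea of what the producer side of such an endpoint typically looks like with Spring Kafka, here is a minimal sketch; the class name, request path, and response text are illustrative assumptions, not the actual code in com.ibm.simplekafka.

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Illustrative sketch only; the real endpoints live in com.ibm.simplekafka.
@RestController
public class DemoProducerController {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public DemoProducerController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // POST a plain-text body and publish it to the 'demo' topic
    @PostMapping("/api/messages")
    public String publish(@RequestBody String message) {
        kafkaTemplate.send("demo", message);
        return "message sent to topic demo";
    }
}

KafkaTemplate is the standard Spring Kafka abstraction for sending records; the broker address is picked up from the application's spring.kafka.bootstrap-servers configuration.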

Orders Kafka example

This app includes a producer REST endpoint that sends an order to a Kafka topic.
There is a listener agent that listens for any new messages/orders on the topic and handles them. Right now, the handler simply logs the order message.

For the orders demo to work, the following topic needs to be created:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic orders
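A minimal sketch of what such a listener agent looks like with Spring Kafka's @KafkaListener; the class name and group id are assumptions, and the repo's actual implementation may differ.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Illustrative listener sketch; names are assumptions.
@Component
public class OrderListener {

    private static final Logger log = LoggerFactory.getLogger(OrderListener.class);

    // Invoked for every new order message that arrives on the 'orders' topic;
    // as described above, the current handling is simply to log the order.
    @KafkaListener(topics = "orders", groupId = "orders-demo")
    public void handleOrder(String order) {
        log.info("Received order: {}", order);
    }
}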

Word Count Streaming Kafka example

This app consists of a producer REST endpoint that sends a string of words to an input topic. A listener agent running the word count stream application listens for new messages on the input topic, counts the occurrences of individual words, and puts the results on the output topic.

For the word count stream app to work, the following topics need to be created:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-wordcount-plaintext-input
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-wordcount-output
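The topology follows the canonical Kafka Streams word-count example. A self-contained sketch is shown below, written as a plain Kafka Streams application rather than the Spring wiring used in this repo:

import java.util.Arrays;
import java.util.Locale;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("streams-wordcount-plaintext-input");

        // Split each line into words, group by word, and count occurrences;
        // the counts are written to the output topic as <word, count> records.
        lines.flatMapValues(line -> Arrays.asList(line.toLowerCase(Locale.ROOT).split("\\W+")))
             .groupBy((key, word) -> word)
             .count()
             .toStream()
             .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));

        new KafkaStreams(builder.build(), props).start();
    }
}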

To monitor the output, you need to run the following command in a terminal window:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic streams-wordcount-output \
    --from-beginning \
    --formatter kafka.tools.DefaultMessageFormatter \
    --property print.key=true \
    --property print.value=true \
    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer

Pipe Streaming Kafka example

The pipe stream app simply takes each value from the input topic and puts it on the output topic. For the pipe stream app to work, the following topics need to be created:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-pipe-input
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-pipe-output
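The heart of a pipe topology is a single line. A sketch, with configuration and startup omitted (they are the same as in the word-count sketch above):

import org.apache.kafka.streams.StreamsBuilder;

// Forward every record from the input topic to the output topic unchanged.
StreamsBuilder builder = new StreamsBuilder();
builder.stream("streams-pipe-input").to("streams-pipe-output");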

Start a Kafka producer and consumer as follows:

./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-pipe-input
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic streams-pipe-output --from-beginning

Mapping Stream Kafka example

The mapping stream app reverses the key-value pair, so that each record on the output topic has the original value as the key and the original key as the value.

For the mapping stream app to work, the following topics need to be created:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic mapping-stream-input
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic mapping-stream-output
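A sketch of the key/value swap using the Kafka Streams map operation, with configuration and startup again omitted:

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;

// Swap key and value for every record read from the input topic.
StreamsBuilder builder = new StreamsBuilder();
builder.<String, String>stream("mapping-stream-input")
       .map((key, value) -> KeyValue.pair(value, key))
       .to("mapping-stream-output");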

Start a Kafka producer and consumer as follows:

./bin/kafka-console-producer.sh --broker-list localhost:9092 \
    --topic mapping-stream-input \
    --property "parse.key=true" \
    --property "key.separator=:"
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic mapping-stream-output \
    --from-beginning \
    --formatter kafka.tools.DefaultMessageFormatter \
    --property print.key=true \
    --property print.value=true \
    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
    --property value.deserializer=org.apache.kafka.common.serialization.StringDeserializer

Window count Kafka example

Counts the number of times the input record key occurs within the specified Kafka window duration.

Topics:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic window-stream-input
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic window-stream-output
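A sketch of a windowed count using TimeWindows; the one-minute window size is an illustrative assumption, not necessarily the duration configured in this repo:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

// Count how often each key appears within a fixed one-minute window.
StreamsBuilder builder = new StreamsBuilder();
builder.<String, String>stream("window-stream-input")
       .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
       .windowedBy(TimeWindows.of(Duration.ofMinutes(1)))
       .count()
       .toStream()
       // flatten the windowed key back to the original key for the output topic
       .map((windowedKey, count) -> KeyValue.pair(windowedKey.key(), count))
       .to("window-stream-output", Produced.with(Serdes.String(), Serdes.Long()));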

Producer and Consumer

./bin/kafka-console-producer.sh --broker-list localhost:9092 \
    --topic window-stream-input \
    --property "parse.key=true" \
    --property "key.separator=:"
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic window-stream-output \
    --from-beginning \
    --formatter kafka.tools.DefaultMessageFormatter \
    --property print.key=true \
    --property print.value=true \
    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
    --property value.deserializer=org.apache.kafka.common.serialization.StringDeserializer

JSON serializer and deserializer example

Topics:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic json-input
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic string-output

To observe the input and output topics:

./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic json-input --from-beginning
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic string-output --from-beginning
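This section gives no further detail, but a common shape for this kind of example is a topology that deserializes a JSON payload from the input topic and writes a plain string to the output topic. A heavily hedged sketch, assuming a hypothetical OrderEvent payload type and Spring Kafka's JsonSerde; the real payload class and wiring in this repo may be different:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.springframework.kafka.support.serializer.JsonSerde;

// Assumes a hypothetical payload type such as: record OrderEvent(String id, int quantity) {}
// Deserialize JSON from 'json-input' into OrderEvent objects and write a plain-string
// summary to 'string-output'.
StreamsBuilder builder = new StreamsBuilder();
builder.stream("json-input", Consumed.with(Serdes.String(), new JsonSerde<>(OrderEvent.class)))
       .mapValues(event -> event.id() + "=" + event.quantity())
       .to("string-output", Produced.with(Serdes.String(), Serdes.String()));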

Session window stream service example

Uses a session window to capture the events/messages that belong to a transaction. The session window has an inactivity timeout that ends the transaction/session.

Topics:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sessionwindow-stream-input
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sessionwindow-stream-output
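A sketch of a session-windowed count using SessionWindows; the 30-second inactivity gap is an assumed value for illustration. Records with the same key that arrive within the gap are counted as one session; once the gap expires with no new records, the session times out and its count is emitted.

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.SessionWindows;

// Group records by key into sessions separated by a 30-second inactivity gap,
// then count the records in each session.
StreamsBuilder builder = new StreamsBuilder();
builder.<String, String>stream("sessionwindow-stream-input")
       .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
       .windowedBy(SessionWindows.with(Duration.ofSeconds(30)))
       .count()
       .toStream()
       .map((windowedKey, count) -> KeyValue.pair(windowedKey.key(), count))
       .to("sessionwindow-stream-output", Produced.with(Serdes.String(), Serdes.Long()));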

Producer and Consumer

./bin/kafka-console-producer.sh --broker-list localhost:9092 \
    --topic sessionwindow-stream-input \
    --property "parse.key=true" \
    --property "key.separator=:"
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic sessionwindow-stream-output \
    --from-beginning \
    --formatter kafka.tools.DefaultMessageFormatter \
    --property print.key=true \
    --property print.value=true \
    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
    --property value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
