In this sample application, you will create a basic Java web application using Spring. This provides a starting point for creating Java microservice applications running on Spring. It contains no default application code, but comes with standard best practices, including a health check and application metric monitoring.
Capabilities are provided through dependencies in the pom.xml
file. The ports are set in the pom.xml
file to the defaults of 8080
for HTTP and 8443
for HTTPS, and are exposed to the CLI in the cli-config.yml
file.
You can deploy this application to IBM Cloud, or build it locally by cloning this repo first. Once your app is live, you can access the /health
endpoint as a starting point for building out your cloud-native application.
After you have created a new git repo from this git template, remember to rename the project.
Edit pom.xml
and change the default name to the name you used when creating the template.
Make sure you are logged in to IBM Cloud using the IBM Cloud CLI and have access to your development cluster. If you are using OpenShift, make sure you have logged in to the OpenShift CLI on the command line.
npm i -g @garage-catalyst/ibm-garage-cloud-cli
Use the IBM Garage for Cloud CLI to register the Git repo with Jenkins:
igc pipeline -n dev
To get started building this application locally, you can either run the application natively or use the IBM Cloud Developer Tools for containerization and easy deployment to IBM Cloud.
- Maven
- Java 11: Any compliant JVM should work.
- Java 11 JDK from Oracle, or download a Liberty server package that contains the IBM JDK (Windows, Linux)
To build and run the application:
./gradlew build
./gradlew bootRun
For more details on how to use this Starter Kit Template, please review the IBM Garage for Cloud Developer Tools Developer Guide.
- Learn more about augmenting your Java applications on IBM Cloud with the Java Programming Guide.
- Explore other sample applications on IBM Cloud.
This sample application is licensed under the Apache License, Version 2.0. Separate third-party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 and the Apache License, Version 2.0.
Finally, the template components can be periodically updated by running the following:
./update-template.sh
Install Kafka
Start Zookeeper
./bin/zookeeper-server-start.sh config/zookeeper.properties
Start Kafka server/broker
./bin/kafka-server-start.sh config/server.properties
Describe Kafka topics
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe
List Kafka topics
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
Delete Kafka topic
./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic demo
This app consists of a simple Kafka producer REST endpoint that writes a message to a Kafka topic, and a simple consumer REST endpoint that reads a message from the topic. See the code in the Java package 'com.ibm.simplekafka'.
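The produce/consume pattern above can be sketched in plain Java with an in-memory queue standing in for the Kafka topic. This is only an illustration of the endpoint semantics; the class and method names are hypothetical, and the real application uses the Kafka client code in 'com.ibm.simplekafka'.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative stand-in: a BlockingQueue plays the role of the Kafka topic.
public class SimpleTopicSketch {
    private final BlockingQueue<String> topic = new LinkedBlockingQueue<>();

    // Mirrors the producer endpoint: write one message to the topic.
    public void produce(String message) {
        topic.add(message);
    }

    // Mirrors the consumer endpoint: read one message from the topic,
    // or null if the topic is empty.
    public String consume() {
        return topic.poll();
    }

    public static void main(String[] args) {
        SimpleTopicSketch demo = new SimpleTopicSketch();
        demo.produce("hello from the producer endpoint");
        System.out.println(demo.consume());
    }
}
```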
Create a Kafka topic called demo with 1 partition and a replication factor of 1:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic demo
This app consists of a producer REST endpoint that sends an order to a Kafka topic.
A listener agent listens for any new messages/orders on the topic and handles them.
Right now, the handler simply logs the order message.
For the orders demo to work, the following topic needs to be created:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic orders
This app consists of a producer REST endpoint that sends a string of words to an input topic. A listener agent running the word count stream application listens for new messages on the input topic, counts the occurrences of individual words, and puts the result on the output topic.
For the word count stream app to work, the following topics need to be created:
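The counting step the word count stream application performs can be sketched in plain Java as follows. The real app does this with the Kafka Streams DSL; the class and method names here are illustrative only.

```java
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch of the word count logic applied to each message
// read from the input topic.
public class WordCountSketch {
    public static Map<String, Long> countWords(String line) {
        Map<String, Long> counts = new TreeMap<>();
        // Lowercase and split on non-word characters, then tally each word.
        for (String word : line.toLowerCase().split("\\W+")) {
            if (!word.isEmpty()) {
                counts.merge(word, 1L, Long::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countWords("all streams lead to kafka hello kafka streams"));
    }
}
```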
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-wordcount-plaintext-input
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-wordcount-output
To monitor the output, you need to run the following command in a terminal window:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
--topic streams-wordcount-output \
--from-beginning \
--formatter kafka.tools.DefaultMessageFormatter \
--property print.key=true \
--property print.value=true \
--property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
--property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
The pipe stream app simply takes each record from the input topic and puts it on the output topic. For the pipe stream app to work, the following topics need to be created:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-pipe-input
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-pipe-output
Start a Kafka producer and consumer as follows:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-pipe-input
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic streams-pipe-output --from-beginning
The mapping stream app reverses the key-value pair, so that a record on the output topic has the value as the key and the key as the value.
For the mapping stream app to work, the following topics need to be created:
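The per-record transformation described above can be sketched in plain Java. The real app applies this swap inside a Kafka Streams topology; the class and method names here are illustrative only.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

// Illustrative sketch of the mapping stream transformation: the key and
// value of each record are swapped before the record is written out.
public class MappingStreamSketch {
    public static Map.Entry<String, String> reverse(String key, String value) {
        return new SimpleEntry<>(value, key);
    }

    public static void main(String[] args) {
        Map.Entry<String, String> out = reverse("user1", "alice");
        System.out.println(out.getKey() + ":" + out.getValue());
    }
}
```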
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic mapping-stream-input
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic mapping-stream-output
Start a Kafka producer and consumer as follows:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mapping-stream-input --property "parse.key=true" --property "key.separator=:"
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mapping-stream-output --from-beginning --formatter kafka.tools.DefaultMessageFormatter --property print.key=true --property print.value=true --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer --property value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
Counts the number of times each input record key occurs within the specified Kafka window duration.
Topics:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic window-stream-input
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic window-stream-output
Producer and Consumer
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic window-stream-input --property "parse.key=true" --property "key.separator=:"
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic window-stream-output --from-beginning --formatter kafka.tools.DefaultMessageFormatter --property print.key=true --property print.value=true --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer --property value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
Topics:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic json-input
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic string-output
To observe the input and output topics:
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic json-input --from-beginning
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic string-output --from-beginning
Uses a SessionWindow to capture the events/messages in a transaction. The SessionWindow has a timeout that closes an inactive transaction/session.
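The session-window grouping can be sketched in plain Java for a single key: a gap between consecutive event timestamps larger than the timeout closes the current session, just as the SessionWindow timeout would. The real app uses Kafka Streams session windows; this sketch and its names are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of session-window grouping: sorted event timestamps
// are split into sessions wherever the gap between consecutive events
// exceeds timeoutMs.
public class SessionWindowSketch {
    public static List<List<Long>> sessions(long[] sortedTimestamps, long timeoutMs) {
        List<List<Long>> result = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        for (long t : sortedTimestamps) {
            // A gap larger than the timeout closes the current session.
            if (!current.isEmpty() && t - current.get(current.size() - 1) > timeoutMs) {
                result.add(current);
                current = new ArrayList<>();
            }
            current.add(t);
        }
        if (!current.isEmpty()) {
            result.add(current);
        }
        return result;
    }

    public static void main(String[] args) {
        long[] ts = {0, 100, 5000, 5100};
        // A 1000 ms timeout splits these events into two sessions.
        System.out.println(sessions(ts, 1000));
    }
}
```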
Topics:
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sessionwindow-stream-input
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic sessionwindow-stream-output
Producer and Consumer
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic sessionwindow-stream-input --property "parse.key=true" --property "key.separator=:"
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sessionwindow-stream-output --from-beginning --formatter kafka.tools.DefaultMessageFormatter --property print.key=true --property print.value=true --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer --property value.deserializer=org.apache.kafka.common.serialization.StringDeserializer