
zilla-examples's People

Contributors

adaemonthread, akrambek, attilakreiner, bmaidics, dependabot[bot], jfallows, llukyanov, lukefallows, vordimous


zilla-examples's Issues

The http.filesystem example hangs when returning some files

Steps to reproduce:

  1. Change directory into the http.filesystem example directory
  2. Change the line password: ${{env.KEYSTORE_PASSWORD}} in zilla.yaml to password: generated.
  3. Download the attached jquery.slim.min.js.zip and unzip it into the www directory
  4. Run zilla using the following docker command:
docker run \
  -p 8080:8080 \
  -p 9090:9090 \
  -v `pwd`/zilla.yaml:/etc/zilla/zilla.yaml \
  -v `pwd`/tls/localhost.p12:/etc/zilla/tls/localhost.p12 \
  -v `pwd`/www:/var/www ghcr.io/aklivity/zilla:0.9.57 \
  start -v
  5. In another terminal window, run curl http://localhost:8080/jquery.slim.min.js -v
  6. Observe that the curl command doesn't return as expected
    jquery.slim.min.js.zip
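
A quick way to bound the hang and see how many bytes actually arrive before the request stalls (a diagnostic sketch; the 10-second cap is arbitrary):

# cap the request at 10s and report how much of the file was received
curl -v --max-time 10 -o /dev/null -w 'bytes received: %{size_download}\n' \
  http://localhost:8080/jquery.slim.min.js

If the byte count is consistently short of the file's actual size, the response body is being truncated rather than the headers never arriving.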

Migrate examples to use Kubernetes instead of Docker

Start by migrating the following examples to illustrate how to use Zilla with Kubernetes.

  • tcp.echo requires configuration only, via a ConfigMap
  • http.echo requires a TLS key and certificate, via a Secret
  • http.kafka.oneway requires multiple containers

Then with these complete, apply a consistent approach to the remaining examples too.
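
A minimal sketch of how the first two variants could be wired up with plain kubectl; the namespaces and the ConfigMap/Secret names below are illustrative, not taken from an existing chart:

# tcp.echo: configuration only, delivered as a ConfigMap
kubectl create namespace zilla-tcp-echo
kubectl create configmap zilla-config --from-file=zilla.yaml -n zilla-tcp-echo

# http.echo: additionally needs the TLS keystore, delivered as a Secret
kubectl create namespace zilla-http-echo
kubectl create configmap zilla-config --from-file=zilla.yaml -n zilla-http-echo
kubectl create secret generic zilla-tls --from-file=tls/localhost.p12 -n zilla-http-echo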

Add more DevEx features to zilla-examples

enhancements to make:

  • shorter path to external Kafka usage
  • downloadable artifacts per folder
  • wget/curl entry points to run examples
  • docker compose examples

housekeeping:

  • update to use new port 7114
  • normalize binding names

Tested

  • amqp.reflect/
    • error on main #84
  • grpc.echo/
  • grpc.kafka.echo/
  • grpc.kafka.fanout/
    • error on main #83
  • grpc.kafka.proxy/
    • error on main #85
  • grpc.proxy/
  • http.echo.jwt/
    • error on main #91
  • http.echo/
  • http.filesystem.config.server/
  • http.filesystem/
    • error on main #90
  • http.kafka.async/
  • http.kafka.cache/
  • http.kafka.crud/
  • http.kafka.oneway/
  • http.kafka.sasl.scram/
  • http.kafka.sync/
  • http.proxy/
  • http.redpanda.sasl.scram/
  • kafka.broker/
  • kubernetes.prometheus.autoscale/
  • mqtt.kafka.broker.jwt/
    • error on main #91
  • mqtt.kafka.broker/
  • mqtt.proxy.asyncapi/
  • quickstart/
  • sse.kafka.fanout/
  • sse.proxy.jwt/
    • error on main #92
    • error on main #91
  • tcp.echo/
  • tcp.reflect/
  • tls.echo/
  • tls.reflect/
  • ws.echo/
  • ws.reflect/

amqp.reflect example times out installing helm in startup.sh

Running ./setup.sh attempts to start Zilla, but the helm install times out:

+ ZILLA_CHART=oci://ghcr.io/aklivity/charts/zilla
+ helm install zilla-amqp-reflect oci://ghcr.io/aklivity/charts/zilla --namespace zilla-amqp-reflect --create-namespace --wait --values values.yaml --set-file 'zilla\.yaml=zilla.yaml' --set-file 'secrets.tls.data.localhost\.p12=tls/localhost.p12'

Error: INSTALLATION FAILED: timed out waiting for the condition

pod inspect json:

{
	"Id": "adaf36276a7d91369a3eb2c8af9f5ed41d14bfbdd049adf82cffb23603c1930d",
	"Created": "2023-10-23T18:58:17.807321054Z",
	"Path": "/pause",
	"Args": [],
	"State": {
		"Status": "running",
		"Running": true,
		"Paused": false,
		"Restarting": false,
		"OOMKilled": false,
		"Dead": false,
		"Pid": 70877,
		"ExitCode": 0,
		"Error": "",
		"StartedAt": "2023-10-23T18:58:17.955757929Z",
		"FinishedAt": "0001-01-01T00:00:00Z"
	},
	"Image": "sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	"ResolvConfPath": "/var/lib/docker/containers/adaf36276a7d91369a3eb2c8af9f5ed41d14bfbdd049adf82cffb23603c1930d/resolv.conf",
	"HostnamePath": "/var/lib/docker/containers/adaf36276a7d91369a3eb2c8af9f5ed41d14bfbdd049adf82cffb23603c1930d/hostname",
	"HostsPath": "/var/lib/docker/containers/adaf36276a7d91369a3eb2c8af9f5ed41d14bfbdd049adf82cffb23603c1930d/hosts",
	"LogPath": "/var/lib/docker/containers/adaf36276a7d91369a3eb2c8af9f5ed41d14bfbdd049adf82cffb23603c1930d/adaf36276a7d91369a3eb2c8af9f5ed41d14bfbdd049adf82cffb23603c1930d-json.log",
	"Name": "/k8s_POD_zilla-amqp-reflect-5455df76dd-dftbw_zilla-amqp-reflect_658af0b3-c7dd-48c7-bbf9-032f795f9512_0",
	"RestartCount": 0,
	"Driver": "overlay2",
	"Platform": "linux",
	"MountLabel": "",
	"ProcessLabel": "",
	"AppArmorProfile": "",
	"ExecIDs": null,
	"HostConfig": {
		"Binds": null,
		"ContainerIDFile": "",
		"LogConfig": {
			"Type": "json-file",
			"Config": {}
		},
		"NetworkMode": "none",
		"PortBindings": {},
		"RestartPolicy": {
			"Name": "",
			"MaximumRetryCount": 0
		},
		"AutoRemove": false,
		"VolumeDriver": "",
		"VolumesFrom": null,
		"ConsoleSize": [
			0,
			0
		],
		"CapAdd": null,
		"CapDrop": null,
		"CgroupnsMode": "private",
		"Dns": null,
		"DnsOptions": null,
		"DnsSearch": null,
		"ExtraHosts": null,
		"GroupAdd": null,
		"IpcMode": "shareable",
		"Cgroup": "",
		"Links": null,
		"OomScoreAdj": -998,
		"PidMode": "",
		"Privileged": false,
		"PublishAllPorts": false,
		"ReadonlyRootfs": false,
		"SecurityOpt": [
			"no-new-privileges"
		],
		"UTSMode": "",
		"UsernsMode": "",
		"ShmSize": 67108864,
		"Runtime": "runc",
		"Isolation": "",
		"CpuShares": 2,
		"Memory": 0,
		"NanoCpus": 0,
		"CgroupParent": "/kubepods/kubepods/besteffort/pod658af0b3-c7dd-48c7-bbf9-032f795f9512",
		"BlkioWeight": 0,
		"BlkioWeightDevice": null,
		"BlkioDeviceReadBps": null,
		"BlkioDeviceWriteBps": null,
		"BlkioDeviceReadIOps": null,
		"BlkioDeviceWriteIOps": null,
		"CpuPeriod": 0,
		"CpuQuota": 0,
		"CpuRealtimePeriod": 0,
		"CpuRealtimeRuntime": 0,
		"CpusetCpus": "",
		"CpusetMems": "",
		"Devices": null,
		"DeviceCgroupRules": null,
		"DeviceRequests": null,
		"MemoryReservation": 0,
		"MemorySwap": 0,
		"MemorySwappiness": null,
		"OomKillDisable": null,
		"PidsLimit": null,
		"Ulimits": null,
		"CpuCount": 0,
		"CpuPercent": 0,
		"IOMaximumIOps": 0,
		"IOMaximumBandwidth": 0,
		"MaskedPaths": [
			"/proc/asound",
			"/proc/acpi",
			"/proc/kcore",
			"/proc/keys",
			"/proc/latency_stats",
			"/proc/timer_list",
			"/proc/timer_stats",
			"/proc/sched_debug",
			"/proc/scsi",
			"/sys/firmware"
		],
		"ReadonlyPaths": [
			"/proc/bus",
			"/proc/fs",
			"/proc/irq",
			"/proc/sys",
			"/proc/sysrq-trigger"
		]
	},
	"GraphDriver": {
		"Data": {
			"LowerDir": "/var/lib/docker/overlay2/24465f7263067b618ba70ac91899daee074c664dafcd170e8d68841275b4ab7a-init/diff:/var/lib/docker/overlay2/7d680b5228e9bd77743e1313b4067c3610514ca126a525d901d41d9c509adee2/diff",
			"MergedDir": "/var/lib/docker/overlay2/24465f7263067b618ba70ac91899daee074c664dafcd170e8d68841275b4ab7a/merged",
			"UpperDir": "/var/lib/docker/overlay2/24465f7263067b618ba70ac91899daee074c664dafcd170e8d68841275b4ab7a/diff",
			"WorkDir": "/var/lib/docker/overlay2/24465f7263067b618ba70ac91899daee074c664dafcd170e8d68841275b4ab7a/work"
		},
		"Name": "overlay2"
	},
	"Mounts": [],
	"Config": {
		"Hostname": "zilla-amqp-reflect-5455df76dd-dftbw",
		"Domainname": "",
		"User": "65535:65535",
		"AttachStdin": false,
		"AttachStdout": false,
		"AttachStderr": false,
		"Tty": false,
		"OpenStdin": false,
		"StdinOnce": false,
		"Env": [
			"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
		],
		"Cmd": null,
		"Image": "registry.k8s.io/pause:3.9",
		"Volumes": null,
		"WorkingDir": "/",
		"Entrypoint": [
			"/pause"
		],
		"OnBuild": null,
		"Labels": {
			"annotation.kubernetes.io/config.seen": "2023-10-23T18:58:17.494508054Z",
			"annotation.kubernetes.io/config.source": "api",
			"app.kubernetes.io/instance": "zilla-amqp-reflect",
			"app.kubernetes.io/name": "zilla",
			"io.kubernetes.container.name": "POD",
			"io.kubernetes.docker.type": "podsandbox",
			"io.kubernetes.pod.name": "zilla-amqp-reflect-5455df76dd-dftbw",
			"io.kubernetes.pod.namespace": "zilla-amqp-reflect",
			"io.kubernetes.pod.uid": "658af0b3-c7dd-48c7-bbf9-032f795f9512",
			"pod-template-hash": "5455df76dd"
		}
	},
	"NetworkSettings": {
		"Bridge": "",
		"SandboxID": "90ddbef7ef9eecd850ce60b1828584e16392a55e21c6182a2da9654e7505b94d",
		"HairpinMode": false,
		"LinkLocalIPv6Address": "",
		"LinkLocalIPv6PrefixLen": 0,
		"Ports": {},
		"SandboxKey": "/var/run/docker/netns/90ddbef7ef9e",
		"SecondaryIPAddresses": null,
		"SecondaryIPv6Addresses": null,
		"EndpointID": "",
		"Gateway": "",
		"GlobalIPv6Address": "",
		"GlobalIPv6PrefixLen": 0,
		"IPAddress": "",
		"IPPrefixLen": 0,
		"IPv6Gateway": "",
		"MacAddress": "",
		"Networks": {
			"none": {
				"IPAMConfig": null,
				"Links": null,
				"Aliases": null,
				"NetworkID": "b38041963a6b7bcf7bc62ca1e80226c7f33f5937495583a4bb480b1c4a6a23a2",
				"EndpointID": "64e0dd9c54f4c633b4e467f71fa03a43396307109d2e83976df483e5fbef286b",
				"Gateway": "",
				"IPAddress": "",
				"IPPrefixLen": 0,
				"IPv6Gateway": "",
				"GlobalIPv6Address": "",
				"GlobalIPv6PrefixLen": 0,
				"MacAddress": "",
				"DriverOpts": null
			}
		}
	}
}
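
Some commands that may help narrow down why the pod never becomes ready within helm's --wait window (a sketch; the namespace and the app.kubernetes.io/instance label are taken from the output above):

# list pods and their readiness in the release namespace
kubectl get pods -n zilla-amqp-reflect
# show events (image pulls, failed probes, pending volumes) for the zilla pod
kubectl describe pod -n zilla-amqp-reflect -l app.kubernetes.io/instance=zilla-amqp-reflect
# dump container logs, including any sidecars
kubectl logs -n zilla-amqp-reflect -l app.kubernetes.io/instance=zilla-amqp-reflect --all-containers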

Unexpected behavior in the sse.proxy.jwt example

The behavior described in the README of the sse.proxy.jwt example is erratic. The JWT token's duration is 30s. The listening curl command does not display the expected data:Hello, world text but instead drops the connection during the first 20 seconds. After the event:challenge is sent, everything works fine for the remaining time (10s).

Steps to reproduce:

$ docker build -t zilla-examples/sse-server:latest .
$ docker stack deploy -c stack.yml example --resolve-image never
$ export JWT_TOKEN=$(jwt encode \
    --alg "RS256" \
    --kid "example" \
    --iss "https://auth.example.com" \
    --aud "https://api.example.com" \
    --sub "example" \
    --exp 30s \
    --no-iat \
    --payload "scope=proxy:stream" \
    --secret @private.pem)

# in shell 1:
$ curl -v --cacert test-ca.crt "https://localhost:9090/events?access_token=${JWT_TOKEN}"
# in shell 2:
$ echo '{ "data": "Hello, world '`date`'" }' | nc -c localhost 7001
# in shell 1 observe:
* Connection #0 to host localhost left intact # curl exits

# in shell 1:
$ curl -v --cacert test-ca.crt "https://localhost:9090/events?access_token=${JWT_TOKEN}"
# in shell 2:
$ echo '{ "data": "Hello, world '`date`'" }' | nc -c localhost 7001
# in shell 1 observe:
* Connection #0 to host localhost left intact # curl exits

# in shell 1:
$ curl -v --cacert test-ca.crt "https://localhost:9090/events?access_token=${JWT_TOKEN}"
# wait for the rest of the 20s...
event:challenge
data:{"method":"POST","headers":{"content-type":"application/x-challenge-response"}}

# things are starting to work OK from this point on until the token expires...
# in shell 2:
$ echo '{ "data": "Hello, world '`date`'" }' | nc -c localhost 7001
# in shell 1 observe:
data:Hello, world Wed Jan 25 10:39:42 CET 2023
# in shell 2:
$ echo '{ "data": "Hello, world '`date`'" }' | nc -c localhost 7001
# in shell 1 observe:
data:Hello, world Wed Jan 25 10:39:42 CET 2023

# wait for the rest of the 10s for the token to expire
* Connection #0 to host localhost left intact # curl exits

The erratic behavior happens consistently before the event:challenge. After that, everything works fine until the token expires.
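
To separate token-expiry effects from the challenge behavior, it may be worth minting a longer-lived token and repeating the test. A sketch reusing the jwt-cli flags from the steps above, with only --exp changed (the 300s value is arbitrary):

# issue a 5-minute token instead of 30 seconds
export JWT_TOKEN=$(jwt encode \
    --alg "RS256" \
    --kid "example" \
    --iss "https://auth.example.com" \
    --aud "https://api.example.com" \
    --sub "example" \
    --exp 300s \
    --no-iat \
    --payload "scope=proxy:stream" \
    --secret @private.pem)
curl -v --cacert test-ca.crt "https://localhost:9090/events?access_token=${JWT_TOKEN}"

If the connection still drops during the first ~20 seconds with a 5-minute token, the problem is unlikely to be expiry-related.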

`grpc.proxy` example

bindings
tcp -> tls -> http -> grpc -> grpc -> http -> tls -> tcp

Note: the TLS client is optional when connecting Zilla to the gRPC server.

Grpc echo server docker image for Arm based chip

Goal

We want to set up the following Kubernetes example for the grpc.proxy example: Zilla gRPC proxy -> gRPC echo server.

Problem
I have created the https://github.com/akrambek/grpc-echo repo for a gRPC echo server based on Spring Boot. However, the problem I faced was building the image for ARM-based chips, since protobuf-maven-plugin doesn't provide an ARM-based protoc executable yet. To show the approach, I have pushed the work to this branch: https://github.com/akrambek/zilla-examples/tree/feature/grpc-kafka/grpc.proxy

FROM  maven:3.9.0-eclipse-temurin-17-alpine

RUN apk upgrade --no-cache && \
    apk add --no-cache libgcc libstdc++ ncurses-libs && \
    apk add --no-cache libgcc nodejs git

RUN wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
    wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.35-r0/glibc-2.35-r0.apk

RUN apk add --force-overwrite glibc-2.35-r0.apk

WORKDIR /usr/app

RUN git clone https://github.com/akrambek/grpc-echo.git
RUN cd grpc-echo && mvn install

ENTRYPOINT ["java", "-jar", "grpc-echo/target/echo-develop-SNAPSHOT.jar"]

The build fails with the following error when using the maven:3.9.0-eclipse-temurin-17-alpine base image:

#10 33.54 [INFO] Compiling 1 proto file(s) to /usr/app/grpc-echo/target/generated-sources/protobuf/java
#10 33.66 [ERROR] PROTOC FAILED: qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory
#10 33.66
#10 33.66 [ERROR] /usr/app/grpc-echo/src/main/resources/proto/echo.proto [0:0]: qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory

Alternative

Perhaps we can find an alternative solution, for example a Go-based gRPC echo server. I found a couple of them but didn't have enough time to try them out, and the ones I found had not been maintained for a long time.
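
One possible direction, not verified here, is to sidestep the musl/glibc mismatch by building explicitly for linux/amd64 with Buildx (letting QEMU emulate), ideally on a glibc-based, non-Alpine Maven base image so the x86_64 protoc downloaded by protobuf-maven-plugin can find its dynamic loader:

# build the echo-server image for amd64 on an Apple Silicon host
# (assumes Docker Buildx and QEMU emulation are available)
docker buildx build --platform linux/amd64 -t zilla-examples/grpc-echo:latest .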

Kafka install times out in the grpc.kafka.proxy example

./setup.sh 
+ helm install zilla-grpc-kafka-proxy-kafka chart --namespace zilla-grpc-kafka-proxy --create-namespace --wait --timeout 2m
Error: INSTALLATION FAILED: timed out waiting for the condition

Kafka looks to have started correctly, so I manually finished the ./startup.sh commands, which succeeded. However, the first grpcurl command just hangs.

I attempted to list the kafka topics:

kcat -b localhost:9092 -L
%3|1698090965.456|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv6#[::1]:9092 failed: Connection refused (after 1ms in state CONNECT)
%3|1698090966.459|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv6#[::1]:9092 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
%3|1698090967.464|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused (after 0ms in state CONNECT)
%3|1698090968.464|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv6#[::1]:9092 failed: Connection refused (after 0ms in state CONNECT)
%3|1698090969.468|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Connect to ipv4#127.0.0.1:9092 failed: Connection refused (after 0ms in state CONNECT)
% ERROR: Failed to acquire metadata: Local: Broker transport failure (Are the brokers reachable? Also try increasing the metadata timeout with -m <timeout>?)
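
The connection-refused output suggests nothing is listening on localhost:9092 yet, i.e. the port-forward from the Kafka pod may not be running. A diagnostic sketch (the service name kafka is an assumption, not confirmed against the chart):

# check that the Kafka pod is actually Ready
kubectl get pods -n zilla-grpc-kafka-proxy
# forward the broker port locally, then retry the metadata query
kubectl port-forward -n zilla-grpc-kafka-proxy service/kafka 9092:9092 &
kcat -b localhost:9092 -L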

`grpc.kafka.proxy` example

bindings
tcp -> tls -> http -> grpc -> grpc-kafka -> kafka -> tls -> tcp

tcp <- tls <- kafka <- kafka-grpc -> grpc -> http -> tls -> tcp

Note: the TLS client is optional when connecting Zilla to the gRPC server.

`grpc.kafka.echo` example

bindings
tcp -> tls -> http -> grpc -> grpc-kafka -> kafka -> tls -> tcp

Note: uses the same Kafka topic for gRPC requests and responses.

`grpc.kafka.fanout` example

bindings
tcp -> tls -> http -> grpc -> grpc-kafka -> kafka -> tls -> tcp

Note: shows gRPC server streaming from a Kafka topic, with reliable delivery.

mqtt and consumer groups

I'm running the mqtt.kafka.broker example without modification, except that the chart version is 0.9.54.
Each time I publish a message with Mosquitto:
mosquitto_pub -V 'mqttv5' --topic 'zilla' --message 'Hello, world' --debug --insecure
it works, but it creates a new consumer group. Is this normal?
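
One way to observe the behavior directly is to list the consumer groups after each publish. A sketch, assuming the example's Kafka broker is reachable on localhost:9092 (e.g. via the port-forward set up by the example) and the Kafka CLI tools are installed:

# list consumer groups; publishing again and re-running shows whether a new group appears
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list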

Error using verify steps in grpc.kafka.fanout

Following the steps in the Unreliable server streaming section, I get a panic from grpcurl (version 1.8.8):

echo 'message: "test"' | protoc --encode=example.FanoutMessage proto/fanout.proto > binary.data

kcat -P -b localhost:9092 -t messages -k -e ./binary.data                      

grpcurl -insecure -proto proto/fanout.proto -d '' localhost:9090 example.FanoutService.FanoutServerStream
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x28 pc=0x104f5d4e8]

goroutine 1 [running]:
github.com/jhump/protoreflect/desc/protoparse.parseToProtoRecursive({0x1053f2c80, 0x1400000f308}, {0x1400005d920, 0x1b}, 0x140001e7608?, 0x104ef4bf4?, 0x0?)
        github.com/jhump/[email protected]/desc/protoparse/parser.go:389 +0x148
github.com/jhump/protoreflect/desc/protoparse.parseToProtoRecursive.func1(0x140002eb380, 0x140000bfa90, 0x140000bf860, {0x1053f2c80, 0x1400000f308}, 0x1053a6560?, 0x140000b3f01?)
        github.com/jhump/[email protected]/desc/protoparse/parser.go:401 +0x164
github.com/jhump/protoreflect/desc/protoparse.parseToProtoRecursive({0x1053f2c80, 0x1400000f308}, {0x16b77f749, 0x12}, 0x140002e9480?, 0x0?, 0x140001e77b8?)
        github.com/jhump/[email protected]/desc/protoparse/parser.go:402 +0x1d8
github.com/jhump/protoreflect/desc/protoparse.parseToProtosRecursive({0x1053f2c80, 0x1400000f308}, {0x14000288da0, 0x1, 0x0?}, 0x0?, 0x0?)
        github.com/jhump/[email protected]/desc/protoparse/parser.go:365 +0x80
github.com/jhump/protoreflect/desc/protoparse.Parser.ParseFiles({{0x0, 0x0, 0x0}, 0x1, 0x0, 0x0, 0x0, 0x1, 0x0, 0x0, ...}, ...)
        github.com/jhump/[email protected]/desc/protoparse/parser.go:153 +0x210
github.com/fullstorydev/grpcurl.DescriptorSourceFromProtoFiles({0x0, 0x0, 0x0}, {0x14000288da0?, 0x140001e7b18?, 0x10468f018?})
        github.com/fullstorydev/grpcurl/desc_source.go:71 +0xbc
main.main()
        github.com/fullstorydev/grpcurl/cmd/grpcurl/grpcurl.go:501 +0xca8

*.jwt examples | thread 'main' panicked at 'Invalid JSON provided!'

Using the jwt encode command throws an error.

This affects http.echo.jwt, sse.proxy.jwt, and mqtt.kafka.broker.jwt.

 http.echo.jwt % jwt encode \
    --alg "RS256" \
    --kid "example" \
    --iss "https://auth.example.com" \
    --aud "https://api.example.com" \
    --exp 1968862747 \
    --no-iat \
    --secret @private.pem
thread 'main' panicked at 'Invalid JSON provided!', src/translators/encode.rs:122:18
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
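
Two things that may be worth checking (both are assumptions, not confirmed fixes): whether the installed jwt-cli version differs from the one the README was written against, and whether passing an explicit empty JSON body avoids the panic:

# print the jwt-cli version in use
jwt --version
# same claims as above, but with an explicit empty JSON body as the positional argument
jwt encode \
    --alg "RS256" \
    --kid "example" \
    --iss "https://auth.example.com" \
    --aud "https://api.example.com" \
    --exp 1968862747 \
    --no-iat \
    --secret @private.pem \
    '{}'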

sse.proxy.jwt error in zilla-examples/sse-server

The error is in the https://github.com/jfallows/sse-server code

zilla-examples/sse-server:latest logs

2023-10-26 12:58:17 internal/process/esm_loader.js:74
2023-10-26 12:58:17     internalBinding('errors').triggerUncaughtException(
2023-10-26 12:58:17                               ^
2023-10-26 12:58:17 
2023-10-26 12:58:17 TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string or an instance of Buffer or URL. Received undefined
2023-10-26 12:58:17     at Object.openSync (fs.js:454:10)
2023-10-26 12:58:17     at Object.readFileSync (fs.js:364:35)
2023-10-26 12:58:17     at SSEServer.createServer (file:///usr/app/node_modules/sse-server/index.js:36:15)
2023-10-26 12:58:17     at file:///usr/app/node_modules/sse-server/bin/cli.js:32:28
2023-10-26 12:58:17     at ModuleJob.run (internal/modules/esm/module_job.js:145:37)
2023-10-26 12:58:17     at async Loader.import (internal/modules/esm/loader.js:182:24)
2023-10-26 12:58:17     at async Object.loadESM (internal/process/esm_loader.js:68:5) {
2023-10-26 12:58:17   code: 'ERR_INVALID_ARG_TYPE'
2023-10-26 12:58:17 }

The http.filesystem example fails for https requests

Steps to reproduce:

  1. Change directory into the http.filesystem example directory
  2. Run the ./setup.sh script
  3. Run curl --cacert test-ca.crt https://localhost:9090/index.html

Expected behavior:
curl is able to fetch the index.html file.

Observed behavior:
Curl exits with the following output:
curl: (92) HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
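
To narrow down whether the failure is specific to HTTP/2 framing, it may help to repeat the request forced onto HTTP/1.1 (a diagnostic sketch):

curl -v --http1.1 --cacert test-ca.crt https://localhost:9090/index.html

If the HTTP/1.1 request succeeds while the HTTP/2 request fails with PROTOCOL_ERROR, that points at the h2 response framing rather than the filesystem binding itself.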

Unexpected behavior in the http.proxy example

The behavior described in the README of the http.proxy example is flaky. The nghttp command sometimes works fine and sometimes shows erratic behavior.

Steps to reproduce:

$ docker stack deploy -c stack.yml example --resolve-image never
$ nghttp -ansy https://localhost:9090/demo.html # run this multiple times

Expected output:

***** Statistics *****

Request timing:
  responseEnd: the  time  when  last  byte of  response  was  received
               relative to connectEnd
 requestStart: the time  just before  first byte  of request  was sent
               relative  to connectEnd.   If  '*' is  shown, this  was
               pushed by server.
      process: responseEnd - requestStart
         code: HTTP status code
         size: number  of  bytes  received as  response  body  without
               inflation.
          URI: request URI

see http://www.w3.org/TR/resource-timing/#processing-model

sorted by 'complete'

id  responseEnd requestStart  process code size request path
 13    +46.71ms        +32us  46.68ms  200  320 /demo.html
  2    +47.66ms *   +38.54ms   9.12ms  200   89 /style.css

Erratic output (takes a few minutes to appear):

Some requests were not processed. total=2, processed=1
***** Statistics *****

Request timing:
  responseEnd: the  time  when  last  byte of  response  was  received
               relative to connectEnd
 requestStart: the time  just before  first byte  of request  was sent
               relative  to connectEnd.   If  '*' is  shown, this  was
               pushed by server.
      process: responseEnd - requestStart
         code: HTTP status code
         size: number  of  bytes  received as  response  body  without
               inflation.
          URI: request URI

see http://www.w3.org/TR/resource-timing/#processing-model

sorted by 'complete'

id  responseEnd requestStart  process code size request path
 13    +14.99ms        +29us  14.96ms  200  320 /demo.html

Please note the differences: the erratic output has the extra first line
Some requests were not processed. total=2, processed=1
and is missing the last line
2 +47.66ms * +38.54ms 9.12ms 200 89 /style.css

If I run the nghttp command multiple times, I observe the expected and the erratic behavior roughly 50% of the time each.
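
To quantify how often the erratic case occurs, the run can be repeated in a loop and the /style.css result line counted (a sketch; 20 iterations is arbitrary, and a count of 0 marks an erratic run):

for i in $(seq 1 20); do
  nghttp -ansy https://localhost:9090/demo.html | grep -c '/style.css'
done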

Bring your own Kafka example

I've got the mqtt.kafka.broker example working as-is, but I'm having difficulty figuring out exactly how to use my existing Kafka broker. In the zilla.yaml file I've tried replacing the host and port under tcp_client0, but that didn't work. Is there an example that shows how to do this? Thanks.
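
Before changing zilla.yaml, it may help to confirm the external broker is reachable from the machine running Zilla and to check the advertised listeners it returns, since those must also resolve from Zilla's network. A sketch (replace the host with your broker's address):

# query broker metadata; the advertised listeners appear in the output
kcat -b my-kafka.example.internal:9092 -L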
