
jikkou's People

Contributors

allsimon, bsvicente, chrisoberle, dependabot[bot], fhussonnois, github-actions[bot], gquintana, gschmutz, joschi, jramonrod, saidbouras


jikkou's Issues

spec file format is sensitive to types

The topic config:

partitions: "1"

throws

java.lang.ClassCastException: class java.lang.String cannot be cast to class java.lang.Integer (java.lang.String and java.lang.Integer are in module java.base of loader 'bootstrap')
        at io.streamthoughts.kafka.specs.reader.TopicClusterSpecReader.to(TopicClusterSpecReader.java:45)
        at io.streamthoughts.kafka.specs.reader.TopicClusterSpecReader.to(TopicClusterSpecReader.java:33)

The solution is to remove double quotes:

partitions: 1

Both should work (duck typing).

The opposite holds for config entries:

 retention.bytes: 134217728

is not supported

java.lang.ClassCastException: class java.lang.Integer cannot be cast to class java.lang.String (java.lang.Integer and java.lang.String are in module java.base of loader 'bootstrap')
        at io.streamthoughts.kafka.specs.reader.TopicClusterSpecReader.lambda$to$0(TopicClusterSpecReader.java:51)

and must be written as:

 retention.bytes: "134217728"

The Jackson YAML dataformat (https://github.com/FasterXML/jackson-dataformats-text/tree/master/yaml) may be interesting: it would make YAML parsing easier.
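Either way, the spec reader could normalize scalar types instead of casting blindly, so that both quoted and unquoted values are accepted. A minimal sketch of such duck-typed coercion in Python (the project itself is Java; these helper names are purely illustrative):

```python
def coerce_int(value):
    """Accept both `partitions: 1` and `partitions: "1"` (duck typing)."""
    if isinstance(value, int):
        return value
    if isinstance(value, str) and value.strip().lstrip("-").isdigit():
        return int(value.strip())
    raise TypeError(f"cannot coerce {value!r} to int")


def coerce_str(value):
    """Kafka config values are strings on the wire; accept numbers too."""
    return value if isinstance(value, str) else str(value)
```

With helpers like these, `partitions` would accept either form, and config values such as `retention.bytes` would always be normalized to strings before being handed to the AdminClient.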

Alter and Delete Topics

Hello,

I have a question regarding the alter option.

I've created a topic with the following configuration:

version: 1
topics:
- configs:
    cleanup.policy: compact
    compression.type: producer
    min.insync.replicas: '1'
    retention.ms: '1000'
    max.message.bytes: '64000'
    flush.messages: '1'
  name: prueba-v2
  partitions: 10
  replication_factor: 1

When I execute describe, the configuration of that topic is shown:

../kafka/kafka/bin/./kafka-topics.sh --describe --zookeeper localhost:2181 --topic prueba-v2
Topic: prueba-v2	PartitionCount: 10	ReplicationFactor: 1	Configs: compression.type=producer,cleanup.policy=compact,max.message.bytes=64000,min.insync.replicas=1,retention.ms=1000,flush.messages=1
	Topic: prueba-v2	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
	Topic: prueba-v2	Partition: 1	Leader: 0	Replicas: 0	Isr: 0
	Topic: prueba-v2	Partition: 2	Leader: 0	Replicas: 0	Isr: 0
	Topic: prueba-v2	Partition: 3	Leader: 0	Replicas: 0	Isr: 0
	Topic: prueba-v2	Partition: 4	Leader: 0	Replicas: 0	Isr: 0
	Topic: prueba-v2	Partition: 5	Leader: 0	Replicas: 0	Isr: 0
	Topic: prueba-v2	Partition: 6	Leader: 0	Replicas: 0	Isr: 0
	Topic: prueba-v2	Partition: 7	Leader: 0	Replicas: 0	Isr: 0
	Topic: prueba-v2	Partition: 8	Leader: 0	Replicas: 0	Isr: 0
	Topic: prueba-v2	Partition: 9	Leader: 0	Replicas: 0	Isr: 0

Now I modify the max.message.bytes property, changing its value from 64000 to 128000:

version: 1
topics:
- configs:
    cleanup.policy: compact
    compression.type: producer
    min.insync.replicas: '1'
    retention.ms: '1000'
    max.message.bytes: '128000'
    flush.messages: '1'
  name: prueba-v2
  partitions: 10
  replication_factor: 1

When I execute command:

docker run -it --net host -v $(pwd)/kafka-specs.yaml:/kafka-specs.yaml streamthoughts/kafka-specs --file /kafka-specs.yaml --bootstrap-server localhost:9092  --execute --alter --entity-type topics --verbose --yes

It indicates that no operations were performed:

ok : 0, changed : 0, failed : 0

Do you know what I might be missing?
It's the same with the delete.

Thanks in advance!!

Regards!

Upload binaries in Maven Central

Building kafka-specs behind an enterprise HTTP Proxy is very complicated.

Having binaries (tar.gz, zip, fat jar) in Maven Central or any binary repository would help.

Option accepting multiple topic spec yaml files in one run

Is your feature request related to a problem? Please describe.

I'm in the process of including jikkou in a docker-compose.yml environment (generated using Platys) so that it runs once when the stack is started, and again whenever a docker-compose up -d is done. For that, it would be nice if multiple topic-specs.yml files could be used in one run of jikkou.

Describe the solution you'd like
It would be nice if we could pass a pattern instead of a hard-coded file path, so that all files matching the pattern would be read and used for the run.

Describe alternatives you've considered
n.a.

Additional context
n.a.
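One possible shape for this feature, sketched in Python (jikkou is written in Java; the function name and the glob-pattern semantics are assumptions, not existing behaviour): expand the pattern into a deterministic, sorted list of spec files and process them in order.

```python
import glob
import os


def resolve_spec_files(pattern):
    """Expand a file pattern (e.g. 'specs/*.yml') into a sorted,
    deterministic list of spec file paths for a single run."""
    files = sorted(glob.glob(pattern))
    if not files:
        raise FileNotFoundError(f"no spec files match {pattern!r}")
    return files
```

Sorting keeps runs reproducible regardless of filesystem ordering, which matters if later files are allowed to override earlier ones.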

Feature: apply changes following plan order

When running

  topics:
    - name: topic1
      partitions: 3
      replication_factor: 2
    - name: topic2
      partitions: 3
      replication_factor: 2

The output is

TASK [CREATE] Create a new topic topic2 (partitions=3, replicas=2) - OK ****************************
TASK [CREATE] Create a new topic topic1 (partitions=3, replicas=2) - OK ****************************

Topics are created in a random order, which makes it hard

  • to check what's being done (or about to be done when dry run mode)
  • to debug the plan content

I'd like the plan order to be respected.
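Respecting declaration order could be as simple as sorting the computed changes by each topic's position in the spec file. An illustrative Python sketch (not the tool's actual code; the change representation is assumed):

```python
def order_changes(spec_topic_names, planned_changes):
    """Order planned changes to follow the declaration order of the spec file.

    spec_topic_names: topic names in the order they appear in the spec.
    planned_changes:  list of change dicts, each with a 'topic' key,
                      produced in arbitrary (e.g. hash-map) order.
    """
    position = {name: i for i, name in enumerate(spec_topic_names)}
    # Unknown topics (e.g. deletions of unmanaged topics) sort last.
    return sorted(planned_changes,
                  key=lambda change: position.get(change["topic"], len(position)))
```

This keeps dry-run output aligned with the spec file, which is what makes the plan reviewable.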

Support of type Transactional IDs in ACL resources?

We are using jikkou (0.8.0) and wanted to know how we could apply ACLs for a transactional producer using the ACL resource.

We would typically use the kafka admin tools to do this with the below:

kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
   --add --transactional-id '*' --allow-principal 'User:*' --operation write

I haven't extensively tested jikkou, so I cannot say at this stage whether there are additional flags for this type of ACL, but I would appreciate any light that can be shed on this.

Cheers.

Protect topics from deletion

Internal topics (__consumer_offsets and the like) or legacy topics not managed by Kafka Specs shouldn't be deleted.

I'd like to have

  • a topic blacklist/whitelist command line option, an allow/deny regex for example, to select which topics can or cannot be deleted (or altered?).
  • something similar to Ansible's limit option, which selects a subset of hosts
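An allow/deny filter of this kind could look like the following Python sketch (the option semantics and the built-in protection of `__`-prefixed internal topics are proposals, not existing jikkou behaviour):

```python
import re


def deletable(topic, allow=None, deny=None):
    """Decide whether a topic may be deleted, given optional allow/deny
    regexes (hypothetical CLI options). Internal topics are always kept."""
    if topic.startswith("__"):                 # __consumer_offsets and the like
        return False
    if deny is not None and re.fullmatch(deny, topic):
        return False
    if allow is not None and not re.fullmatch(allow, topic):
        return False
    return True
```

Deny takes precedence over allow, so a broad allow-list can still carve out exceptions.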

bootstrap-servers argument ignored

No matter how I set bootstrap-servers, the setting is ignored:

--bootstrap-servers myhost.mycompany.fr:9094
--bootstrap-servers=myhost.mycompany.fr:9094

The admin client tries to connect to localhost:

Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2021-05-11 16:07:56,698 INFO  [main] org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:
        bootstrap.servers = [localhost:9092]

The README.md (line 57) says (singular):

--bootstrap-server localhost:9092

The CLI help says (plural and equal sign):

--bootstrap-servers=<bootstrapServer>
                  A list of host/port pairs to use for establishing the initial

The trick was that the --bootstrap-servers argument is ignored when placed before the topics create command.

CLI Help --file-* options are not clear

bin/kafka-specs --bootstrap-servers=$(hostname -f):9092 topics create --help
...
      --file-path=<file>    Align cluster resources with the specified
                              specifications.
      --file-url=<url>      Delete all remote entities which are not described
                              in specifications.
...

The difference between file-path and file-url is not clear to me.

add feature to clean up topic messages

During the development cycle, it is common to need to clean up all topics after some tests.

We should consider adding a new feature that temporarily sets the topic retention to 0 in order to delete the messages of each topic. The command should wait for all topics to be empty before returning.
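The control flow might look like this Python sketch, with the actual Kafka AdminClient calls abstracted behind injected callables (everything here is illustrative, not existing jikkou code):

```python
import time


def purge_topic(set_retention, topic_is_empty, topic, timeout_s=60, poll_s=1.0):
    """Temporarily set retention.ms to 0 and wait until the topic is empty.

    set_retention(topic, ms) -> previous retention.ms value
    topic_is_empty(topic)    -> bool
    Both are injected stand-ins for real AdminClient operations.
    """
    original = set_retention(topic, 0)          # remember the previous value
    try:
        deadline = time.monotonic() + timeout_s
        while not topic_is_empty(topic):
            if time.monotonic() > deadline:
                raise TimeoutError(f"{topic} still not empty after {timeout_s}s")
            time.sleep(poll_s)
    finally:
        set_retention(topic, original)          # always restore the old setting
```

The `finally` block is the important part: the original retention must be restored even if the wait times out, or a failed cleanup would silently leave the topic deleting everything.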

add support for specifying common config props shared between multiple topics

The idea is to be able to specify something like:

version: 1
metadata: {}
specs:
  config_maps:
    - name: "DefaultCleanupPolicy"
      configs:
        cleanup.policy: delete
        retention.ms: 10000

  topics:
    - name: 'my-topic-p1'
      partitions: 1
      replication_factor: 1
      config_map_refs: [DefaultCleanupPolicy]

    - name: 'my-topic-p2'
      partitions: 2
      replication_factor: 1
      config_map_refs: [DefaultCleanupPolicy]
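Resolving `config_map_refs` would amount to merging the referenced maps into each topic's own configs, with topic-level values taking precedence over the shared defaults. An illustrative Python sketch of that merge (the data shapes mirror the YAML above; the function is hypothetical):

```python
def resolve_topic_configs(config_maps, topic):
    """Merge configs from referenced config_maps into the topic's configs.

    config_maps: list of {'name': ..., 'configs': {...}} entries.
    topic:       a topic spec dict with optional 'config_map_refs'/'configs'.
    Topic-level values override the shared defaults.
    """
    maps = {m["name"]: m["configs"] for m in config_maps}
    merged = {}
    for ref in topic.get("config_map_refs", []):
        merged.update(maps[ref])            # later refs override earlier ones
    merged.update(topic.get("configs", {})) # topic-level values win
    return merged
```

Letting the topic's own `configs` win makes the config map a default, not a mandate, which is usually the expected semantics for shared settings.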

Alter Topic always returns status Changed

Running the same command twice with the same options (--alter --entity-type topics --file topics.yml) and the same config file, topics are considered changed the second time.

17:10:51     TASK [ALTER] Alter topic test2 - CHANGED ***************************************************************
17:10:51     {
17:10:51       "changed": true,
17:10:51       "end": 1597158651235,
17:10:51       "resource": {
17:10:51         "name": "test2",
17:10:51         "partitions": 12,
17:10:51         "replicationFactor": 1,
17:10:51         "configs": {}
17:10:51       },
17:10:51       "failed": false,
17:10:51       "status": "CHANGED"
17:10:51     }
17:10:51     TASK [ALTER] Alter topic test1 - CHANGED ***************************************************************
17:10:51     {
17:10:51       "changed": true,
17:10:51       "end": 1597158651235,
17:10:51       "resource": {
17:10:51         "name": "test1",
17:10:51         "partitions": 12,
17:10:51         "replicationFactor": 1,
17:10:51         "configs": {
17:10:51           "compression.type": "producer",
17:10:51           "min.insync.replicas": "1"
17:10:51         }
17:10:51       },
17:10:51       "failed": false,
17:10:51       "status": "CHANGED"
17:10:51     }
17:10:51     ok : 0, changed : 2, failed : 0

Feature: pattern_type should default to LITERAL

When pattern_type is not set an error java.lang.NullPointerException: 'patternType' cannot be null is raised.

First, the error message is misleading; it should say pattern_type cannot be null instead.

Then, pattern_type should default to LITERAL like in the kafka-acls command line tool.

Finally, I wonder whether a pattern like begin* could be safely converted to:

pattern: begin
pattern_type: PREFIXED

This would be smart shortcut.
After all, as far as I understand Kafka ACLs, pattern * is converted to:

pattern_type: ALL
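Taken together, the proposed defaulting and shortcut rules could be sketched as follows (Python for illustration; the `ALL` value follows this issue's wording rather than a confirmed Kafka PatternType, and the function itself is hypothetical):

```python
def infer_pattern(pattern, pattern_type=None):
    """Resolve (pattern, pattern_type) with the defaults proposed above:
    explicit pattern_type wins; '*' means ALL; a trailing '*' is a
    PREFIXED shortcut; everything else defaults to LITERAL."""
    if pattern_type:
        return pattern, pattern_type.upper()
    if pattern == "*":
        return pattern, "ALL"
    if pattern.endswith("*"):
        return pattern[:-1], "PREFIXED"
    return pattern, "LITERAL"
```

Defaulting to LITERAL matches the kafka-acls command line tool, so specs without an explicit pattern_type would behave the way Kafka users already expect.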

Bug: kafka-specs validate fails on ACLS

I run bin/kafka-specs validate --file-path=etc/kafka-specs.yml

java.lang.RuntimeException: Failed to serialize specification into YAML: No serializer found for class io.streamthoughts.kafka.specs.model.V1AccessPermission and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationFeature.FAIL_ON_EMPTY_BEANS) (through reference chain: io.streamthoughts.kafka.specs.model.V1SpecFile["specs"]->io.streamthoughts.kafka.specs.model.V1SpecsObject["security"]->io.streamthoughts.kafka.specs.model.V1SecurityObject["roles"]->java.util.ArrayList[0]->io.streamthoughts.kafka.specs.model.V1AccessRoleObject["permission"])
        at io.streamthoughts.kafka.specs.YAMLClusterSpecWriter.write(YAMLClusterSpecWriter.java:45)
        at io.streamthoughts.kafka.specs.command.validate.ValidateCommand.call(ValidateCommand.java:64)
        at io.streamthoughts.kafka.specs.command.validate.ValidateCommand.call(ValidateCommand.java:32)
        at picocli.CommandLine.executeUserObject(CommandLine.java:1953)
        at picocli.CommandLine.access$1300(CommandLine.java:145)
        at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2352)
        at picocli.CommandLine$RunLast.handle(CommandLine.java:2346)
        at picocli.CommandLine$RunLast.handle(CommandLine.java:2311)
        at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179)
        at picocli.CommandLine.execute(CommandLine.java:2078)
        at io.streamthoughts.kafka.specs.KafkaSpecs.main(KafkaSpecs.java:62)

Using kafka-specs 0.5.0

Feature: allow to override bootstrap.servers using env variable

Describe the solution you'd like
Currently, it's possible to set the bootstrap.servers used for configuring the admin-client through either the CLI arg or application.conf file.

By default, using the default application configuration, it should be possible to override bootstrap.servers using the environment variable JIKKOU_DEFAULT_KAFKA_BOOTSTRAP_SERVERS.
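The resolution order can be sketched in Python; the precedence chosen here (CLI argument first, then the environment variable, then the configuration file) is an assumption about how the override would typically work, not confirmed jikkou behaviour:

```python
import os


def bootstrap_servers(cli_value=None, config_value="localhost:9092"):
    """Resolve bootstrap.servers: CLI arg wins, then the proposed
    JIKKOU_DEFAULT_KAFKA_BOOTSTRAP_SERVERS env variable, then the
    application.conf default."""
    if cli_value:
        return cli_value
    env = os.environ.get("JIKKOU_DEFAULT_KAFKA_BOOTSTRAP_SERVERS")
    if env:
        return env
    return config_value
```

This lets a docker-compose stack inject the broker address via the environment without templating the config file, while still allowing ad-hoc CLI overrides.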

Feature: add a pluggable interface to report applied changes to a third-party system (e.g. Kafka)

Describe the solution you'd like
For some scenarios, it may be useful to notify a third-party system that a new topic is available on a Kafka cluster (for example, for data governance). To do that, users should be able to enable and configure a pluggable reporter.

Additionally, Jikkou should provide a built-in reporter to publish changes into a Kafka topic using the CloudEvents specification.

Docker image tagged as "latest" is not the latest

Hi

Just started using Jikkou and realized that the image tagged as latest is not the 0.8.0 version (I wanted to use the environment variable JIKKOU_DEFAULT_KAFKA_BOOTSTRAP_SERVERS :-).

I think it would be good to have latest point to 0.8.0.

Thanks for your work, I really just started using it but I think it is a really helpful tool!

Support user config, in particular quotas

Automate the equivalent of

bin/kafka-configs.sh  --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1

When running in Dry Run mode, don't ask to continue

When running in dry run mode:

 bin/kafka-specs --bootstrap-servers=localhost:9092  --command-config=etc/command.properties acls create --dry-run --file-path=etc/kafka-specs.yml

The output starts with:

Warning: You are about to:
 Create the ACL policies missing on the cluster as describe in the specification file.


 Are you sure you want to continue [y/n]

First, the Warning message is frightening; second, the confirmation question is useless since we are running in dry-run mode.

Feature: a single launch to do all

In the current release, I have to run kafka-specs several times:

  • topics create
  • topics alter
  • topics delete
  • acls create

Each run requires initiating a connection to Kafka, fetching the actual state, and comparing it to the expected state.

Being able to do everything in one go would be very interesting.
Instead of running:

kafka-specs <topics|acls> <create|alter|delete>

I would run

kafka-specs --entity-type topics,acls  --operation create,alter,delete
or even
kafka-specs --entity-type all  --operation all

Error on creating a topic (java.io.FileNotFoundException: /cluster-dev-topics.yml)

Hi,

I'm trying to create a topic through the command:

docker run -it --net host \
-v $(pwd)/kafka-specs.yaml:/kafka-specs.yaml \
streamthoughts/kafka-specs \
--file /cluster-dev-topics.yml \
--bootstrap-server localhost:9092 \
--execute --create \
--entity-type topics \
--verbose

I have a local Kafka instance running, and if I run:

./kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic TutorialTopic3

Created topic TutorialTopic3.

However, if I run it through kafka-specs, it gives me an error:

java.lang.RuntimeException: java.io.FileNotFoundException: /cluster-dev-topics.yml (No such file or directory)

If I execute the command:

docker run --net host \
> streamthoughts/kafka-specs \
> --bootstrap-server localhost:9092 \
> --describe \
> --entity-type topics \
> --default-configs

it returns the correct configuration of the Kafka server:

topics:
- name: TutorialTopic2
  partitions: 1
  replication_factor: 1
  configs:
    cleanup.policy: delete
    compression.type: producer
    delete.retention.ms: '86400000'
    file.delete.delay.ms: '60000'
    flush.messages: '9223372036854775807'
    flush.ms: '9223372036854775807'
    follower.replication.throttled.replicas: ''
    index.interval.bytes: '4096'
    leader.replication.throttled.replicas: ''
    max.compaction.lag.ms: '9223372036854775807'
    max.message.bytes: '1048588'
    message.downconversion.enable: 'true'
    message.format.version: 2.5-IV0
    message.timestamp.difference.max.ms: '9223372036854775807'
    message.timestamp.type: CreateTime
    min.cleanable.dirty.ratio: '0.5'
    min.compaction.lag.ms: '0'
    min.insync.replicas: '1'
    preallocate: 'false'
    retention.bytes: '-1'
    retention.ms: '604800000'
    segment.bytes: '1073741824'
    segment.index.bytes: '10485760'
    segment.jitter.ms: '0'
    segment.ms: '604800000'
    unclean.leader.election.enable: 'false'
- name: TutorialTopic1
  partitions: 1
  replication_factor: 1
  configs:
    cleanup.policy: delete
    compression.type: producer
    delete.retention.ms: '86400000'
    file.delete.delay.ms: '60000'
    flush.messages: '9223372036854775807'
    flush.ms: '9223372036854775807'
    follower.replication.throttled.replicas: ''
    index.interval.bytes: '4096'
    leader.replication.throttled.replicas: ''
    max.compaction.lag.ms: '9223372036854775807'
    max.message.bytes: '1048588'
    message.downconversion.enable: 'true'
    message.format.version: 2.5-IV0
    message.timestamp.difference.max.ms: '9223372036854775807'
    message.timestamp.type: CreateTime
    min.cleanable.dirty.ratio: '0.5'
    min.compaction.lag.ms: '0'
    min.insync.replicas: '1'
    preallocate: 'false'
    retention.bytes: '-1'
    retention.ms: '604800000'
    segment.bytes: '1073741824'
    segment.index.bytes: '10485760'
    segment.jitter.ms: '0'
    segment.ms: '604800000'
    unclean.leader.election.enable: 'false'
- name: TutorialTopic3
  partitions: 1
  replication_factor: 1
  configs:
    cleanup.policy: delete
    compression.type: producer
    delete.retention.ms: '86400000'
    file.delete.delay.ms: '60000'
    flush.messages: '9223372036854775807'
    flush.ms: '9223372036854775807'
    follower.replication.throttled.replicas: ''
    index.interval.bytes: '4096'
    leader.replication.throttled.replicas: ''
    max.compaction.lag.ms: '9223372036854775807'
    max.message.bytes: '1048588'
    message.downconversion.enable: 'true'
    message.format.version: 2.5-IV0
    message.timestamp.difference.max.ms: '9223372036854775807'
    message.timestamp.type: CreateTime
    min.cleanable.dirty.ratio: '0.5'
    min.compaction.lag.ms: '0'
    min.insync.replicas: '1'
    preallocate: 'false'
    retention.bytes: '-1'
    retention.ms: '604800000'
    segment.bytes: '1073741824'
    segment.index.bytes: '10485760'
    segment.jitter.ms: '0'
    segment.ms: '604800000'
    unclean.leader.election.enable: 'false'
- name: TutorialTopic
  partitions: 1
  replication_factor: 1
  configs:
    cleanup.policy: delete
    compression.type: producer
    delete.retention.ms: '86400000'
    file.delete.delay.ms: '60000'
    flush.messages: '9223372036854775807'
    flush.ms: '9223372036854775807'
    follower.replication.throttled.replicas: ''
    index.interval.bytes: '4096'
    leader.replication.throttled.replicas: ''
    max.compaction.lag.ms: '9223372036854775807'
    max.message.bytes: '1048588'
    message.downconversion.enable: 'true'
    message.format.version: 2.5-IV0
    message.timestamp.difference.max.ms: '9223372036854775807'
    message.timestamp.type: CreateTime
    min.cleanable.dirty.ratio: '0.5'
    min.compaction.lag.ms: '0'
    min.insync.replicas: '1'
    preallocate: 'false'
    retention.bytes: '-1'
    retention.ms: '604800000'
    segment.bytes: '1073741824'
    segment.index.bytes: '10485760'
    segment.jitter.ms: '0'
    segment.ms: '604800000'
    unclean.leader.election.enable: 'false'

The screenshots are attached.

Thank you very much in advance.

Greetings.

Feature: same syntax for user ACLs and role ACLs

User ACLs are declared like this:

    - principal: 'User:myuser'
      roles: []
      permissions:
        - resource:
            type: topic
            pattern: mytopic
            pattern_type: PREFIXED
          allow_operations: ['READ:*', 'WRITE:*']

While the role permissions are defined like this:

    - name: myrole
      resource:
        type: topic
        pattern: mytopic
        pattern_type: LITERAL
      allow_operations: ['READ:*', 'WRITE:*']

So a role is basically an alias for a single ACL.

What about having the same structure as users, allowing multiple permissions to be put in a role?

    - name: myrole
      permissions:
        - resource:
            type: topic
            pattern: mytopic
            pattern_type: PREFIXED
          allow_operations: ['READ:*', 'WRITE:*']

The role would then be a group of users having the same permissions.
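Under the proposed structure, computing a user's effective permissions would be a simple union of inline permissions and role-inherited ones. An illustrative Python sketch (the data shapes mirror the YAML above; the function is hypothetical):

```python
def effective_permissions(user, roles):
    """Combine a user's inline permissions with those inherited from roles.

    user:  dict with optional 'permissions' and 'roles' (list of role names).
    roles: list of role dicts, each with 'name' and a 'permissions' list,
           following the unified structure proposed in this issue.
    """
    by_name = {r["name"]: r for r in roles}
    perms = list(user.get("permissions", []))
    for role_name in user.get("roles", []):
        perms.extend(by_name[role_name]["permissions"])
    return perms
```

With one structure for both, the ACL engine only ever deals in permission lists; roles become pure grouping.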

Connect over SSL

This might be a naive question, but how can I connect to my Kafka cluster over SSL? The admin config does not support ssl.* options :(

WARN  o.a.k.c.admin.AdminClientConfig - The configuration 'ssl.truststore.location' was supplied but isn't a known config.
WARN  o.a.k.c.admin.AdminClientConfig - The configuration 'ssl.keystore.password' was supplied but isn't a known config.
WARN  o.a.k.c.admin.AdminClientConfig - The configuration 'ssl.key.password' was supplied but isn't a known config.
WARN  o.a.k.c.admin.AdminClientConfig - The configuration 'ssl.keystore.location' was supplied but isn't a known config.
WARN  o.a.k.c.admin.AdminClientConfig - The configuration 'ssl.truststore.password' was supplied but isn't a known config.
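kafka-specs does accept a --command-config properties file for client settings (it appears in other examples on this page), so one avenue worth checking is putting the ssl.* keys there. For illustration, a Python sketch of parsing such a Java-style properties file into a client config dict (the parser is a simplification: it ignores line continuations and escapes):

```python
def load_client_properties(path):
    """Read a Java-style .properties file (e.g. ssl.truststore.location=...)
    into a dict suitable for building AdminClient configuration."""
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(("#", "!")):
                continue                       # skip blanks and comments
            key, _, value = line.partition("=")  # split on the first '='
            props[key.strip()] = value.strip()
    return props
```

Whether the resulting ssl.* settings actually reach the AdminClient depends on the kafka-specs version in use; the warnings above suggest they were being passed somewhere the client did not expect them.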
