
azure-functions-kafka-extension's Introduction

Azure Functions extensions for Apache Kafka

Branch Status
master Build Status
dev Build Status

This repository contains the Kafka binding extensions for the Azure WebJobs SDK. Communication with Kafka is based on the Confluent.Kafka library.

Please find samples here

DISCLAIMER: This library is supported in the Premium plan, including support for scaling, as Go-Live - that is, supported in production with an SLA. It is also fully supported when using Azure Functions on Kubernetes, where scaling is handled by KEDA based on Kafka queue length. It is currently not supported on the Consumption plan (there is no scale from zero) - this is something the Azure Functions team is still working on.

Quick Start

This library provides a quick start for each language. For general information about the samples, refer to:

Language Description Link DevContainer
C# C# precompiled sample with Visual Studio Readme No
Java Java 8 sample Readme Yes
JavaScript Node 12 sample Readme Yes
PowerShell PowerShell 6 Sample Readme No
Python Python 3.8 sample Readme Yes
TypeScript TypeScript sample (Node 12) Readme Yes

The instructions below are for C#. However, the other languages work through the same underlying C# extension, so you can refer to the same configuration parameters.

Bindings

There are two binding types in this repo: trigger and output. To get started using the extension in a WebJobs project, add a reference to the Microsoft.Azure.WebJobs.Extensions.Kafka project and call AddKafka() during host startup:

static async Task Main(string[] args)
{
  var builder = new HostBuilder()
        .UseEnvironment("Development")
        .ConfigureWebJobs(b =>
        {
            b.AddKafka();
        })
        .ConfigureAppConfiguration(b =>
        {
        })
        .ConfigureLogging((context, b) =>
        {
            b.SetMinimumLevel(LogLevel.Debug);
            b.AddConsole();
        })
        .ConfigureServices(services =>
        {
            services.AddSingleton<Functions>();
        })
        .UseConsoleLifetime();

    var host = builder.Build();
    using (host)
    {
        await host.RunAsync();
    }
}

public class Functions
{
    const string Broker = "localhost:9092";
    const string StringTopicWithOnePartition = "stringTopicOnePartition";
    const string StringTopicWithTenPartitions = "stringTopicTenPartitions";

    /// <summary>
    /// Trigger for the topic
    /// </summary>
    public void MultiItemTriggerTenPartitions(
        [KafkaTrigger(Broker, StringTopicWithTenPartitions, ConsumerGroup = "myConsumerGroup")] KafkaEventData<string>[] events,
        ILogger log)
    {
        foreach (var kafkaEvent in events)
        {
            log.LogInformation(kafkaEvent.Value);
        }
    }
}

Trigger Binding

Trigger bindings are designed to consume messages from a Kafka topic.

public static void StringTopic(
    [KafkaTrigger("BrokerList", "myTopic", ConsumerGroup = "myGroupId")] KafkaEventData<string>[] kafkaEvents,
    ILogger logger)
{
    foreach (var kafkaEvent in kafkaEvents)
        logger.LogInformation(kafkaEvent.Value);
}

Kafka messages can be serialized in multiple formats. Currently the following formats are supported: string, Avro and Protobuf.

Avro Binding Support

The Kafka trigger supports two methods for consuming Avro format:

  • Specific: the concrete user-defined class is instantiated and filled during message deserialization
  • Generic: the user provides the Avro schema and a generic record is created during message deserialization

Using Avro specific

  1. Define a class that implements ISpecificRecord.
  2. The parameter decorated with KafkaTrigger should use the class defined in the previous step as the value type: KafkaEventData<MySpecificRecord>
public class UserRecord : ISpecificRecord
{
    public const string SchemaText = @"    {
  ""type"": ""record"",
  ""name"": ""UserRecord"",
  ""namespace"": ""KafkaFunctionSample"",
  ""fields"": [
    {
      ""name"": ""registertime"",
      ""type"": ""long""
    },
    {
      ""name"": ""userid"",
      ""type"": ""string""
    },
    {
      ""name"": ""regionid"",
      ""type"": ""string""
    },
    {
      ""name"": ""gender"",
      ""type"": ""string""
    }
  ]
}";
    public static Schema _SCHEMA = Schema.Parse(SchemaText);

    [JsonIgnore]
    public virtual Schema Schema => _SCHEMA;
    public long RegisterTime { get; set; }
    public string UserID { get; set; }
    public string RegionID { get; set; }
    public string Gender { get; set; }

    public virtual object Get(int fieldPos)
    {
        switch (fieldPos)
        {
            case 0: return this.RegisterTime;
            case 1: return this.UserID;
            case 2: return this.RegionID;
            case 3: return this.Gender;
            default: throw new AvroRuntimeException("Bad index " + fieldPos + " in Get()");
        };
    }
    public virtual void Put(int fieldPos, object fieldValue)
    {
        switch (fieldPos)
        {
            case 0: this.RegisterTime = (long)fieldValue; break;
            case 1: this.UserID = (string)fieldValue; break;
            case 2: this.RegionID = (string)fieldValue; break;
            case 3: this.Gender = (string)fieldValue; break;
            default: throw new AvroRuntimeException("Bad index " + fieldPos + " in Put()");
        };
    }
}


public static void User(
    [KafkaTrigger("BrokerList", "users", ConsumerGroup = "myGroupId")] KafkaEventData<UserRecord>[] kafkaEvents,
    ILogger logger)
{
    foreach (var kafkaEvent in kafkaEvents)
    {
        var user = kafkaEvent.Value;
        logger.LogInformation($"{JsonConvert.SerializeObject(user)}");
    }
}

Using Avro Generic

  1. In the KafkaTrigger attribute, set AvroSchema to the string representation of the schema.
  2. The parameter type used with the trigger must be KafkaEventData<GenericRecord>.

The sample function app contains one consumer that uses Avro generic; see the AvroGenericTriggers class:

public static class AvroGenericTriggers
{
    const string PageViewsSchema = @"{
  ""type"": ""record"",
  ""name"": ""pageviews"",
  ""namespace"": ""ksql"",
  ""fields"": [
    {
      ""name"": ""viewtime"",
      ""type"": ""long""
    },
    {
      ""name"": ""userid"",
      ""type"": ""string""
    },
    {
      ""name"": ""pageid"",
      ""type"": ""string""
    }
  ]
}";

    [FunctionName(nameof(PageViews))]
    public static void PageViews(
        [KafkaTrigger("BrokerList", "pageviews", AvroSchema = PageViewsSchema, ConsumerGroup = "myGroupId")] KafkaEventData<GenericRecord> kafkaEvent,
        ILogger logger)
    {
        if (kafkaEvent.Value != null)
        {
            // Get the field values manually from the GenericRecord (kafkaEvent.Value)
        }
    }
}

Protobuf Binding Support

Protobuf is supported in the trigger via the Google.Protobuf NuGet package. To consume a topic that uses Protobuf serialization, set the TValue generic argument to a type that implements Google.Protobuf.IMessage. The sample producer produces to the topic protoUser (which must be created beforehand). The sample function app has a trigger handler for this topic in the ProtobufTriggers class:

public static class ProtobufTriggers
{
    [FunctionName(nameof(ProtobufUser))]
    public static void ProtobufUser(
        [KafkaTrigger("BrokerList", "protoUser", ConsumerGroup = "myGroupId")] KafkaEventData<ProtoUser>[] kafkaEvents,
        ILogger logger)
    {
        foreach (var kafkaEvent in kafkaEvents)
        {
            var user = kafkaEvent.Value;
            logger.LogInformation($"{JsonConvert.SerializeObject(user)}");
        }
    }
}

Output Binding

The output binding is designed to produce messages to a Kafka topic. It supports different key and value types; Avro and Protobuf serialization are built in.

[FunctionName("ProduceStringTopic")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    [Kafka("stringTopicTenPartitions", BrokerList = "LocalBroker")] IAsyncCollector<KafkaEventData<string>> events,
    ILogger log)
{
    var kafkaEvent = new KafkaEventData<string>()
    {
        Value = await new StreamReader(req.Body).ReadToEndAsync(),
    };

    await events.AddAsync(kafkaEvent);

    return new OkResult();
}

To set a key, use the two-type-parameter form, for example KafkaEventData<string, string> to define a key of type string (supported key types: int, long, string, byte[]).
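As an illustration, here is a minimal sketch of producing keyed messages; the topic name, the broker app setting, and the key value are placeholders and not part of the official samples.

[FunctionName("ProduceKeyedStringTopic")]
public static async Task<IActionResult> ProduceKeyed(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
    [Kafka("keyedStringTopic", BrokerList = "LocalBroker")] IAsyncCollector<KafkaEventData<string, string>> events,
    ILogger log)
{
    // Key and Value are both strings here; Kafka uses the key to choose the target partition.
    var kafkaEvent = new KafkaEventData<string, string>()
    {
        Key = "myKey",
        Value = await new StreamReader(req.Body).ReadToEndAsync(),
    };

    await events.AddAsync(kafkaEvent);

    return new OkResult();
}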

To produce messages using Protobuf serialization, use KafkaEventData<MyProtobufClass> as the message type. MyProtobufClass must implement the IMessage interface.

For Avro, provide a type that implements ISpecificRecord. If no type is defined, the value will be of type byte[] and no key will be set.
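A sketch of producing Avro messages, reusing the UserRecord class (an ISpecificRecord implementation) shown earlier; the topic and broker setting names are placeholders.

[FunctionName("ProduceAvroUser")]
public static async Task<IActionResult> ProduceAvroUser(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
    [Kafka("users", BrokerList = "LocalBroker")] IAsyncCollector<KafkaEventData<UserRecord>> events,
    ILogger log)
{
    // Because UserRecord implements ISpecificRecord, the extension serializes it as Avro.
    var user = new UserRecord
    {
        RegisterTime = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds(),
        UserID = "user-1",
        RegionID = "region-1",
        Gender = "OTHER"
    };

    await events.AddAsync(new KafkaEventData<UserRecord>() { Value = user });

    return new OkResult();
}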

Configuration

Customization of the Kafka extension is available in the host.json file. As mentioned before, the interface to Kafka is built on the Confluent.Kafka library, so some of the configuration simply passes through to the underlying producer/consumer.

{
  "version": "2.0",
  "extensions": {
    "kafka": {
      "maxBatchSize": 100
    }
  }
}

Configuration Settings

Confluent.Kafka is based on the librdkafka C library. Some of the configuration required by the library is exposed by the extension in this repository. The complete librdkafka configuration reference can be found here.

Extension configuration

Setting Description Default Value
MaxBatchSize Maximum batch size when calling a Kafka trigger function 64
SubscriberIntervalInSeconds Defines how often (in seconds) messages are dispatched to the function when the message volume is below MaxBatchSize per interval 1
ExecutorChannelCapacity Defines the capacity of the channel used to send messages to functions. Once the capacity is reached, the Kafka subscriber pauses until the function catches up 1
ChannelFullRetryIntervalInMs Defines the interval in milliseconds in which the subscriber should retry adding items to channel once it reaches the capacity 50
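For example, a host.json sketch that sets all four values (the numbers are illustrative, not recommendations; property names are assumed to follow the camelCase convention used above):

{
  "version": "2.0",
  "extensions": {
    "kafka": {
      "maxBatchSize": 100,
      "subscriberIntervalInSeconds": 1,
      "executorChannelCapacity": 1,
      "channelFullRetryIntervalInMs": 50
    }
  }
}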

librdkafka configuration

The settings exposed here are targeted to more advanced users that want to customize how librdkafka works. Please check the librdkafka documentation for more information.

Setting librdkafka property Trigger or Output
ReconnectBackoffMs reconnect.backoff.ms Trigger
ReconnectBackoffMaxMs reconnect.backoff.max.ms Trigger
StatisticsIntervalMs statistics.interval.ms Trigger
SessionTimeoutMs session.timeout.ms Trigger
MaxPollIntervalMs max.poll.interval.ms Trigger
QueuedMinMessages queued.min.messages Trigger
QueuedMaxMessagesKbytes queued.max.messages.kbytes Trigger
MaxPartitionFetchBytes max.partition.fetch.bytes Trigger
FetchMaxBytes fetch.max.bytes Trigger
AutoCommitIntervalMs auto.commit.interval.ms Trigger
AutoOffsetReset auto.offset.reset Trigger
LibkafkaDebug debug Both
MetadataMaxAgeMs metadata.max.age.ms Both
SocketKeepaliveEnable socket.keepalive.enable Both
LingerMs linger.ms Output

NOTE: The default for MetadataMaxAgeMs is 180000 and the default for SocketKeepaliveEnable is true; otherwise, the defaults are the same as the underlying Confluent.Kafka configuration properties. For the reasoning behind these defaults, refer to this issue. NOTE: The default for AutoOffsetReset is Earliest; allowed values are Earliest and Latest.
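These settings are also configured under the kafka section of host.json. A sketch, assuming the same camelCase naming as above and purely illustrative values:

{
  "version": "2.0",
  "extensions": {
    "kafka": {
      "maxPollIntervalMs": 300000,
      "sessionTimeoutMs": 30000,
      "autoOffsetReset": "Earliest",
      "libkafkaDebug": "consumer"
    }
  }
}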

If you are missing a configuration setting, please create an issue and describe why you need it.

Connecting to a secure Kafka broker

Both the trigger and the output binding can connect to a secure Kafka broker. The following attribute properties are available to establish a secure connection:

Setting librdkafka property Description
AuthenticationMode sasl.mechanism SASL mechanism to use for authentication
Username sasl.username SASL username for use with the PLAIN and SASL-SCRAM mechanisms
Password sasl.password SASL password for use with the PLAIN and SASL-SCRAM mechanisms
Protocol security.protocol Security protocol used to communicate with brokers
SslKeyLocation ssl.key.location Path to client's private key (PEM) used for authentication
SslKeyPassword ssl.key.password Password for client's certificate
SslCertificateLocation ssl.certificate.location Path to client's certificate
SslCaLocation ssl.ca.location Path to CA certificate file for verifying the broker's certificate

The username and password should reference an Azure Functions application setting and not be hardcoded.
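As a sketch, a SASL/SSL-secured output binding could be declared as below; "KafkaUsername" and "KafkaPassword" stand for application setting names and are placeholders.

[FunctionName("ProduceSecureTopic")]
public static async Task<IActionResult> ProduceSecure(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
    [Kafka("myTopic", BrokerList = "BrokerList",
        Protocol = BrokerProtocol.SaslSsl,
        AuthenticationMode = BrokerAuthenticationMode.Plain,
        Username = "KafkaUsername",
        Password = "KafkaPassword")]
    IAsyncCollector<KafkaEventData<string>> events,
    ILogger log)
{
    var kafkaEvent = new KafkaEventData<string>()
    {
        Value = await new StreamReader(req.Body).ReadToEndAsync(),
    };

    await events.AddAsync(kafkaEvent);

    return new OkResult();
}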

Language support configuration

For non-C# languages, you can specify cardinality to choose whether the KafkaTrigger executes in batches.

Setting Description Option
cardinality Set to "MANY" to enable batching. If omitted or set to "ONE", a single message is passed to the function. For Java functions, if you set "MANY", you also need to set a dataType. "ONE", "MANY"
dataType For Java functions, the type used to deserialize a Kafka event. Required when cardinality is set to "MANY". "string", "binary"
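For non-C# workers these settings live in the function's function.json. A minimal sketch for a JavaScript trigger follows; the broker, topic, and consumer group values are placeholders.

{
  "bindings": [
    {
      "type": "kafkaTrigger",
      "direction": "in",
      "name": "kafkaEvents",
      "brokerList": "BrokerList",
      "topic": "myTopic",
      "consumerGroup": "myGroupId",
      "cardinality": "MANY",
      "dataType": "string"
    }
  ]
}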

Linux Premium plan configuration

Currently, when running a function in a Linux Premium plan environment, you may see an error indicating that the librdkafka library could not be loaded. To address the problem, at least for now, please add the setting below. It adds the extension location to the paths where native libraries are searched. We are working on removing the need for this setting in future releases.

Setting Value Description
LD_LIBRARY_PATH /home/site/wwwroot/bin/runtimes/linux-x64/native librdkafka library path
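For example, the setting can be added with the Azure CLI; the function app and resource group names below are placeholders.

az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings "LD_LIBRARY_PATH=/home/site/wwwroot/bin/runtimes/linux-x64/native"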

.NET Quickstart

For samples take a look at the samples folder.

Connecting to Confluent Cloud in Azure

Connecting to a managed Kafka cluster, such as the one provided by Confluent in Azure, requires a few additional steps:

  1. In the function trigger ensure that Protocol, AuthenticationMode, Username, Password and SslCaLocation are set.
public static class ConfluentCloudTrigger
{
    [FunctionName(nameof(ConfluentCloudStringTrigger))]
    public static void ConfluentCloudStringTrigger(
        [KafkaTrigger("BootstrapServer", "my-topic",
            ConsumerGroup = "azfunc",
            Protocol = BrokerProtocol.SaslSsl,
            AuthenticationMode = BrokerAuthenticationMode.Plain,
            Username = "ConfluentCloudUsername",
            Password = "ConfluentCloudPassword")]
        KafkaEventData<string> kafkaEvent,
        ILogger logger)
    {
        logger.LogInformation(kafkaEvent.Value.ToString());
    }
}
  2. In the Function App application settings (or local.settings.json during development), set the authentication credentials for your Confluent Cloud environment:
    BootstrapServer: should contain the Bootstrap server value found on the Confluent Cloud settings page. It will be something like "xyz-xyzxzy.westeurope.azure.confluent.cloud:9092".
    ConfluentCloudUsername: your API access key, obtained from the Confluent Cloud web site.
    ConfluentCloudPassword: your API secret, obtained from the Confluent Cloud web site.
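During local development the same values go into local.settings.json. A sketch with placeholder values:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "BootstrapServer": "xyz-xyzxzy.westeurope.azure.confluent.cloud:9092",
    "ConfluentCloudUsername": "<confluent-api-key>",
    "ConfluentCloudPassword": "<confluent-api-secret>"
  }
}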

Testing

This repo includes unit and end-to-end tests. End-to-end tests require a Kafka instance. A quick way to provide one is to use the Kafka quick start example mentioned previously, or to use a simpler single-node docker-compose setup (also based on Confluent Docker images):

To get a simple single-node Kafka running:

docker-compose -f ./test/Microsoft.Azure.WebJobs.Extensions.Kafka.EndToEndTests/kafka-singlenode-compose.yaml up -d

To shut down the single-node Kafka:

docker-compose -f ./test/Microsoft.Azure.WebJobs.Extensions.Kafka.EndToEndTests/kafka-singlenode-compose.yaml down

By default, the end-to-end tests will try to connect to Kafka on localhost:9092. If your Kafka broker is located elsewhere, create a local.appsettings.tests.json file in the folder ./test/Microsoft.Azure.WebJobs.Extensions.Kafka.EndToEndTests/, overriding the value of the LocalBroker setting as in the example below:

{
    "LocalBroker": "location-of-your-kafka-broker:9092"
}
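With a broker reachable, the end-to-end tests can then be run with the .NET CLI, for example:

dotnet test ./test/Microsoft.Azure.WebJobs.Extensions.Kafka.EndToEndTests/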

Error Handling and Retries

Handling errors in Azure Functions is important to avoid losing data and missing events, and to monitor the health of your application. It is also important to understand the retry behavior of event-based triggers.

Retries

The Kafka extension supports function-level retries, which are evaluated when a trigger function raises an uncaught exception. As a best practice, catch all exceptions in your code and rethrow only the errors that you want to result in a retry.

Retry Strategies

There are two retry strategies that you can configure:

1. Fixed Delay

A specified amount of time is allowed to elapse between each retry.

2. Exponential Backoff

The first retry waits for the minimum delay. On subsequent retries, time is added exponentially to the initial duration for each retry, until the maximum delay is reached. Exponential back-off adds some small randomization to delays to stagger retries in high-throughput scenarios.
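As an illustration, assuming the standard Azure Functions retry attributes from Microsoft.Azure.WebJobs are used, a Kafka trigger could declare either strategy like this (topic, broker, and retry values are placeholders):

public static class RetryTriggers
{
    // Fixed delay: retry up to 5 times, waiting 10 seconds between attempts.
    [FunctionName(nameof(FixedDelayRetryTrigger))]
    [FixedDelayRetry(5, "00:00:10")]
    public static void FixedDelayRetryTrigger(
        [KafkaTrigger("BrokerList", "myTopic", ConsumerGroup = "myGroupId")] KafkaEventData<string> kafkaEvent,
        ILogger logger)
    {
        logger.LogInformation(kafkaEvent.Value);
    }

    // Exponential backoff: retry up to 5 times, delays grow from 4 seconds up to 15 minutes.
    [FunctionName(nameof(ExponentialBackoffRetryTrigger))]
    [ExponentialBackoffRetry(5, "00:00:04", "00:15:00")]
    public static void ExponentialBackoffRetryTrigger(
        [KafkaTrigger("BrokerList", "myTopic", ConsumerGroup = "myGroupId")] KafkaEventData<string> kafkaEvent,
        ILogger logger)
    {
        logger.LogInformation(kafkaEvent.Value);
    }
}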

For more information, please check the official documentation.

azure-functions-kafka-extension's People

Contributors

actions-user, aloiva, amamounelsayed, anirudhgarg, brandonh-msft, dependabot[bot], estatheo, fabiocav, fbeltrao, github-actions[bot], gliljas, jainharsh98, komayama, krishna-kariya, lpapudippu, mcollier, microsoft-github-policy-service[bot], microsoftopensource, msftgits, mukundnigam, natsby, priyaananthasankar, raorugan, ryancrawcour, shrohilla, thomasmanson, tomkerkhove, tsuyoshiushio, vivekjilla, weshaggard


azure-functions-kafka-extension's Issues

Should review serialisation

Currently we support built-in serialization for Avro and Protobuf.
Avro relies on Confluent.Kafka; Protobuf relies on Google.Protobuf.

Having serialization built in has the following advantages:

  • Performance when using a language worker: it removes the need to send raw byte[] to the language worker and do the deserialization there
  • Simplicity: the user doesn't need to write much code to get it going

Disadvantages:

  • Opinionated: we use specific libraries for serialization. Currently there is no way to inject a different one. Using specific library versions can cause problems when building functions that depend on different versions of the library.

Nice to have custom retry configuration for checkpointing

Config option that would allow you to specify a retry of a batch. So if the batch results in an exception, instead of checkpointing, retry the batch.

Set the number of retries. At least 1 retry? At least 5 retries? Unlimited retries?

Should refactor to accommodate multi-language support

I talked with the Azure Functions product team about the Java bindings, and we found out that the current design doesn't support multiple languages.

e.g. We have "Type" for KafkaTriggerAttribute.

We can only use basic types and POCOs for that. I'll set up a meeting to discuss this; I'm posting an issue so we don't forget it.

dynamic binding fails

When trying to bind to a KafkaAsyncCollector via IBinder.BindAsync<> with the KafkaAttribute, the following exception is thrown:

System.Private.CoreLib: Exception while executing function: ….. 
Microsoft.Azure.WebJobs.Host: Can't bind Kafka to type 'Microsoft.Azure.WebJobs.Extensions.Kafka.KafkaAsyncCollector'.

Please provide an example to show how to dynamically bind to the KafkaAsyncCollector

Must have a custom scale controller

Until this extension is supported by the Functions scale controller we will need logic to handle scale.

Something (a web app or a web job, running in the same App Svc?) will need to check the "queue length" in Kafka topic(s) configured and determine if the current number of Function instances are keeping up adequately.

If we're falling behind, we need to scale out.
If we're ahead, or have drained the queue then we need to scale back in.

@jeffhollan & @anirudhgarg can you please add some details to this requirement that will get us to an MVP scale controller.

Should have TLS client certificates auth

Certificate-based auth is very typical with Kafka, especially when you have Functions running in a container or on Azure where Kafka clusters are running on VMs or Confluent Cloud. Having this support could be important for targeting production workloads.

Javascript / Typescript example for input and output bindings

As a developer, I would like to have an example that shows me how to bind inputs and outputs with Kafka such that I can quickly build an Azure Function with Kafka.

There is this example from a KEDA repo but it doesn't have:

  • An example of how you configure an output connection to Kafka and send one or more messages to that output connection.
  • An example of how to process multiple input messages from Kafka in one function execution.

I'm happy to send a PR if this would be helpful. Let me know where the right spot is to put this example (in this repo or expand the KEDA sample) and rough notes on how to do this with the current interface (if possible).

Must have testing documentation

Add documentation about testing giving instructions to people on how to run End to end tests locally.
Ideally create required topics from code to simplify E2E setup.

Decide if we have a strong named sign

Ideally, our NuGet package should be strong-name signed. However, we use several libraries that are not strong-name signed.

I sent a request to Confluent and talked with them directly. They said they can do it.
confluentinc/confluent-kafka-dotnet#879

I'm planning to publish the first version without strong-name signing; however, if they introduce strong-name signing quickly, I'd like to go with it from the first version.

One downside of not having strong-name signing is that adding it later causes a breaking change. Since we release it as alpha that might be OK; however, if they provide a signed package quickly, I'd be happy to start with signing.

Must have documentation

Must document usage of the trigger.
Also document all the config options available in host.json.
Must have a working end-to-end sample.

Nice to have deadlettering

Include an optional setting where you can specify a deadletter Kafka stream in your config somewhere and it will deadletter to that for you after retries have been exceeded.

Must have Stylecop in project

Use Stylecop so all team members follow the same coding rules. Start from what has been used in EventHubs extension

Nice to have: create topics if it does not exist

It would be nice to create topic on the fly if it does not exist.
That would require topic information such as name, partition count, replication factor, etc.

When defining the attribute, there should be a specific parameter indicating whether creating the topic is allowed.

Must be able to bind to different types in trigger functions

The trigger implementation has been tested using KafkaEventData and string as parameter types.
We must implement the following scenarios:

POCO (importance: high)

  • If the POCO class implements ISpecificRecord, the Avro deserializer should be set when creating the KafkaListener
  • If the POCO class implements IMessage (the Google.Protobuf contract), the Protobuf deserializer should be set when creating the KafkaListener

byte[] (importance: high)

  • Allows deserialisation to be implemented directly in the function.

IGenericRecord (importance: low)

  • If an Avro schema was provided, getting the fields is implemented directly in the function. The Avro deserializer should be set during KafkaListener creation

string (importance: very low)

  • If an Avro schema was provided, we should return a JSON representation of the object (currently it only goes 1 level deep)
  • If a Protobuf contract was supplied, we could return a JSON representation of the object (currently it only goes 1 level deep)

support key, partition, offset, timestamp and topic as stand-alone parameters (importance: medium)

  • In single dispatches
  • In multi dispatch

Should use the "new" way of writing bindings

Currently uses the "old" method of writing bindings, which some earlier bindings (SB/Storage/etc) all still use.

There is a newer way to do this that doesn't require nearly as much code, and there is support for open generics.

You can see an example in the cosmos binding for a collector, which starts here: https://github.com/Azure/azure-webjobs-sdk-extensions/blob/dev/src/WebJobs.Extensions.CosmosDB/Config/CosmosDBExtensionConfigProvider.cs#L56

There's also some documentation on how to do this here: https://github.com/Azure/azure-webjobs-sdk/wiki/Creating-custom-input-and-output-bindings#binding-to-generic-types-with-opentypes

We should update this "old" method of doing bindings to the "new" method.

Should support message headers

Add support for message header properties in Kafka events.

Trigger

  • Add headers to KafkaEventData
  • Expose properties in KafkaEventData and binding properties

Output

  • Enable creation of Kafka messages with headers

Verify checkpoint saving strategy

Checkpoint saving is currently done using Consumer.Commit, which blocks the thread. An alternative is to use StoreOffset, which saves the checkpoint asynchronously in librdkafka.

Commit is more accurate while StoreOffset offers a better throughput.
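For context, a minimal Confluent.Kafka sketch contrasting the two approaches (broker, topic, and group values are placeholders; this is not the extension's actual implementation):

using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "myGroupId",
    EnableAutoCommit = true,        // background commit of stored offsets
    EnableAutoOffsetStore = false   // we decide when an offset counts as processed
};

using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
consumer.Subscribe("myTopic");

var result = consumer.Consume();

// Option 1: synchronous commit - accurate, but blocks the consuming thread.
consumer.Commit(result);

// Option 2: store the offset - librdkafka commits it asynchronously later.
consumer.StoreOffset(result);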

Would love your feedback @jeffhollan, @anirudhgarg and @ryancrawcour

Must have each consumer lock on a single partition

It’s less around a function requirement and more around a Kafka limitation.
Because you may have 5 independent function instances running at the same time, Kafka only allows one reader per partition per consumer group at one time.

In Event Hubs we leverage an SDK called the “EventProcessorHost.” This automatically helps coordinate what partitions are locked by what instances. So if only 1 instance is active it will let that 1 instance lock all of them. Once a 2nd pops up and tries to connect it will rebalance and let consumers know.
I don't know exactly how we'd do that in Kafka - I believe there's a concept of a "leader" that needs to assign partitions. So in the example above, if only one function instance is active it would by default be the leader.
As soon as a 2nd gets scaled out, Kafka would ask the leader (instance 1) how many partitions should go to #2 and how many should stay with #1.

So I expect the trigger would need "leader logic" so that at any time any instance could be the leader, and as the leader it just evenly distributes partitions. Again, I'm not positive exactly how it will work in Kafka since we rely on the Event Processor Host SDK, but this is what I've pieced together.
