
Comments (25)

dlorenc commented on July 17, 2024

Hey everyone! A few of us met yesterday to discuss this proposal and how to make it work well with knative/eventing. Here's what we discussed/agreed on:

@vtereso @ncskier @iancoffey @EliZucker

  • We like knative/eventing!
  • We think that the model in the initial proposal just works with it out of the box, since we'll expose an Addressable for every EventListener
  • This will let people use knative/eventing automatically if they want to, and consume CloudEvents through that
  • We'll continue the plan to support basic HTTP requests/webhooks directly in Tekton Triggers, since those also work with knative/eventing
  • This gets us support for Github and most other things out of the box with no external dependencies
  • We're not actually going to automatically configure Github webhooks as part of this. That could be a helper script or higher level type, or knative eventing
  • We're adding support for http header matching to the TriggerBinding type. This can be used to validate requests actually came from Github, or another system
  • We'll continue adding support for emitting CloudEvents from Pipelines and Tasks, both as a first-class feature of Tekton and through the CloudEvent resource directly
  • We'll also look to add first-class support for CloudEvents, in addition to HTTP requests later as Tekton matures and we see more concrete use-cases
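The header-matching validation mentioned above is, for GitHub, an HMAC comparison against the `X-Hub-Signature-256` header. A minimal sketch of that check (illustrative only, not actual Tekton Triggers code; the function name is made up):

```python
import hashlib
import hmac

def is_valid_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Validate GitHub's X-Hub-Signature-256 header: 'sha256=' + hex HMAC of the body."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, which avoids leaking the secret via timing
    return hmac.compare_digest(expected, signature_header)
```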

n3wscott commented on July 17, 2024

@vincent-pli

I saw that some components in Knative-contrib just create a Deployment and do not expose it to the outside world; is there any consideration behind that?

Some things are pull based and the deployment acts as a way to request out and convert pull/poll based workloads into push based workloads. Maybe that is what you are asking?

If the question is around serverless aspects of it, we still have some work to do with regards to how to scale down active components...

dlorenc commented on July 17, 2024

Thanks for the detailed report @vtereso! I'm convinced by everything here.

vincent-pli commented on July 17, 2024

This is accomplished through user provided event bindings where the structure of the data must be known.

That’s a convincing description, thanks.

iancoffey commented on July 17, 2024

I think it's all perspective, really. Pardon the rambling nature of my reply 😄

tl;dr accepting raw events directly from event sources is certainly simpler, but in practice, I think it might come at the expense of being useful.

A different way to look at this is that until we accept CloudEvents, we are only supporting the simplest use-case: people directly hitting the Trigger Listener from the raw event sources. This is actually a use-case I had not really considered until now, mostly because going directly to the TriggerListener from sources isn't a solution for the eventing systems I hoped this project would address. Hence I have always pictured the tektonlistener (and now Triggers) as a consumer of events, unconcerned with their origin.

If we do not accept CloudEvents, we cannot trigger PipelineRuns based off Knative or Argo (or ce-gateway, fwiw) event processing: users cannot trigger PipelineRuns based on the output of eventing systems' channels/brokers/etc, or make use of existing event sources or clusters. In other words, if we choose not to accept CloudEvents, ONLY this simple workflow can ever work, since we can't leverage existing projects' horsepower and tools to do more interesting things. This means this project won't fit my use-case anymore, or the use-case of anyone using an existing eventing system and hoping to trigger PipelineRuns from it. That is a major bummer for this project IMO.

I don't think it's a question of whether to accept CloudEvents or not, but whether we will become a good fit and "plug in" to the ecosystem, or exclude the existing eventing projects and do everything in-house. We could settle on another event payload spec, but CE is the one that gives us the most bang for the buck at the moment (maybe it's not?).

From my perspective, the most direct way forward for Triggers to become generally useful is for the TriggerListener to be an addressable sink, like any other, and we keep that as our default little boundary and interface with the rest of the world. Then the project's focus and opportunity cost shifts back to the core Triggers business logic we need to write, and we make that little part of the project special and innovative. If we defaulted to CloudEvents and wanted to also accept raw GitHub events someday, I see no technical reason why we can't design this system to allow for expansion. Excluding CloudEvents is a decision we likely can't walk back.

The example yaml above would work well when a message arrives with a header that literally tells us the payload type via http, but what about the events that do not? I do not think this project necessarily has to become so involved in eventing logic. I'm concerned that the effort we put into the detection/normalization/sanitation/parsing will immediately begin to eat into the opportunity cost of actually writing Triggers code.

perspective

I mentioned above that my perspective is not based on just receiving events, but on the full end-state system, so maybe it makes sense to describe that. My personal hope for this project is that Triggers (directly or indirectly) becomes a Responder to events: not just consuming events but producing them and publishing them to other receivers/responders, both internal and external, on k8s clusters and proprietary services. To me, that is the most exciting aspect of the Triggers project. I hope we dream big and aim to facilitate PipelineRuns based on:

  • external events
  • other pipelineruns
    • (from other clusters)
  • cloud providers events (azure/aws/etc)
  • combos of successfully received events
    • combos of successfully received /and/ sent events?

That would meet my requirements for the types of systems I envision Triggers being a central cog in: huge, complex, automated systems. It also serves the small use-case, by just having a tiny gateway in front or by accepting raw events (if we really have to, and I don't think we will). We could do all of that without CloudEvents, but I think the result would be us implementing a lot of code that looks a lot like CloudEvents.

Ironically, if we go the other way and move forward supporting raw events, that means yet another project becomes necessary to plug this gap in functionality 🤣

bobcatfish commented on July 17, 2024

Thanks for writing this all up @vtereso and @iancoffey ! @iancoffey thanks for providing us with concrete use cases and stopping us from being too theoretical 😉 and @vtereso I want to give you a quick shout out for a very thorough, clear, and convincing proposal! Very nicely done imo :D

TL;DR: from what I can tell this proposal actually expands what we can support, from just CloudEvents, to CloudEvents and other arbitrary json HTTP POST based events. So I really like it because it seems to me like we'd be able to support more use cases, not less, and we would be less tied to both CloudEvents, and sources of CloudEvent adapter logic like Knative Eventing.

Excluding CloudEvents is a decision we likely can't walk back.

I agree that we wouldn't want to exclude CloudEvents! The way that I see this proposal, I actually think we wouldn't be excluding CloudEvents; if anything, we'd be expanding the scope of what we support to be CloudEvents AND other arbitrary event formats.

As far as I can tell (jump in if I'm wrong @vtereso !!), this proposal suggests really only one major change to our design: instead of including a (cloud event) type in TriggerBinding, we remove that and basically support both:

  1. A use case where we rely on the user to configure the Event source to call the correct EventListener endpoint (e.g. if they want to handle GitHub push events with their EventListener, they configure GitHub push events to hit that endpoint)
  2. A use case where we provide the user with a filtering/mapping mechanism in the EventListener config that allows them to control which events are mapped to which trigger bindings (i.e. allowing them to use one EventListener for multiple types), e.g. something like:
apiVersion: tekton.dev/v1alpha1
kind: EventListener
...
spec:
  triggerbindings:
    - triggerBindingRef: simple-pipeline-binding
      condition: event-is-github-push # This would be an optional field that would use a Pipeline Condition to introspect a payload

(Or we could provide something more explicit that is json based for the filtering/mapping?)

I'm assuming that in this version, one can still use an EventListener as an addressable endpoint, so there is nothing to prevent us from consuming and responding to CloudEvents.

We could settle on another event payload spec, but CE is the one that gives us the most bang for the buck at the moment (maybe it's not?).

I definitely think if we want to settle on an event payload spec, we should use cloudevents and not try to make our own.

Part of the motivation for this change tho is it's just not clear how much cloudevents actually gives us - and maybe you can provide more insight here @iancoffey if I'm missing something, but looking at the CloudEvents spec it seems like it only really guarantees us a payload will have:

  • id
  • source
  • specversion
  • type

Of those fields, we were planning to use "type" so we could tell what kind of event we were receiving (e.g. GitHub push) and map that to a TriggerBinding. But as soon as we start trying to do more, we are basically dealing with arbitrary json, e.g. using the example currently in our TriggerBinding doc:

  params:
    - name: gitrevision
      value: ${event.head_commit.id}
    - name: gitrepositoryurl
      value: ${event.repository.url}

event.head_commit.id and event.repository.url are not going to be values that the CloudEvent spec provides us with, i.e. our Binding logic is going to need to know how to introspect arbitrary json payloads anyway.
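To make that concrete, resolving a placeholder like ${event.head_commit.id} against an arbitrary payload is essentially a dotted-path lookup over parsed JSON. A sketch of that idea (illustrative only, not the actual Triggers implementation or syntax):

```python
import json

def resolve(payload, path):
    """Walk a dotted path like 'head_commit.id' through a parsed JSON object."""
    value = payload
    for key in path.split("."):
        value = value[key]
    return value

# A minimal stand-in for a GitHub push payload
event = json.loads('{"head_commit": {"id": "abc123"}, '
                   '"repository": {"url": "https://github.com/org/repo"}}')

gitrevision = resolve(event, "head_commit.id")        # -> "abc123"
gitrepositoryurl = resolve(event, "repository.url")   # -> "https://github.com/org/repo"
```

Nothing in this lookup depends on the payload being a CloudEvent, which is the point being made above.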

From my perspective, the most direct way forward for Triggers to become generally useful is for the TriggerListener to be an addressable sink, like any other, and we keep that as our default little boundary and interface with the rest of the world.

I agree that the EventListener should be an addressable sink - in @vtereso 's proposal, he suggests we put a LoadBalancer in front of our EventListener Deployment - I'm assuming the LoadBalancer can be an addressable sink? If not then I agree we should change that design - i.e. we should have a requirement that our endpoint can be an addressable sink, assuming we can use that same endpoint for other arbitrary systems that will emit their events via a HTTP POST.

The big unknown for me (which I think @EliZucker is looking into?) is how many event sources expect to send an event via an HTTP POST. Putting aside the format of the CloudEvent, one benefit of using something like Knative Eventing between the producer of events and Triggers is that (I'm assuming) Knative Eventing would plan to provide us with components (adapters?) that would let us consume other stuff like pub/sub events, data off queues, etc.

However since I think this design doesn't preclude Knative Eventing, and I'm also assuming most of the events folks will want to start using immediately are going to be via HTTP POST (maybe I'm wrong tho @iancoffey - maybe you have other event sources you need to use immediately that don't fit here?), then what I really like about this design, like @vtereso said is that it lets us make progress without tying us to CloudEvents.

My personal hope for this project is that Triggers (directly or indirectly) becomes a Responder to events - not just consuming events but producing them and publishing them to other receivers/responders, both internal and external, on k8s clusters and proprietary services.

I like that idea! I think that's part of what @afrittoli 's addition of a CloudEvent PipelineResource will help us move toward.

Even if we don't require consuming CloudEvents, I agree that if we emit an event, it probably should be a CloudEvent.

It doesn't seem to be tenable in a multiple event setup, even a small one.
In other words, if we choose not to accept CloudEvents, ONLY this simple workflow can ever work, since we can't leverage existing projects. This means this project won't fit my use-case anymore, or the use-case of anyone using an existing eventing system and hoping to trigger PipelineRuns from it. That is a major bummer for this project IMO.

I'd like to hear more about your use case! It's not clear to me what use cases this design precludes that the previous design allows.

bobcatfish commented on July 17, 2024

If I Had More Time, I Would Have Written a Shorter Letter

🙏

bobcatfish commented on July 17, 2024

p.s. If possible it would be great to bring this up and discuss it in the working group meeting this wednesday, even if we haven't yet decided what we want to do. I'm gonna add it to the agenda :D

vtereso commented on July 17, 2024

if anything we'd be expanding the scope of what we support to be cloudevents AND other arbitrary event formats.

Yes, I would not want to exclude CloudEvents.
@iancoffey made great points about why it would be dismissive to only enable raw events.
By "first class support" I was referring to whether or not the event being parsed (in the TriggerBinding) is guaranteed to be in the structure of a CloudEvent (by means of an adapter or assumption).

Since event bindings can support any sort of payload, a CloudEvent (as seen here) is no different.
In regards to being able to emit a CloudEvent: the workload (PipelineRun or otherwise) should be able to do that internally, but perhaps it would also be good to consider some sort of extension to the TriggerBinding definition that could do something like emit a CloudEvent callback (on receiving events, perhaps).

A different way to look at this is that until we accept CloudEvents, we are only supporting the simplest use-case: people directly hitting the Trigger Listener from the raw event sources

As I understand it, since the EventListener is addressable, it could directly process raw events, but it does not have to necessarily. A different addressable endpoint could be used in the hook, which could send the event to the EventListener.

The example yaml above would work well when a message arrives with a header that literally tells us the payload type via http, but what about the events that do not

This is a good point. I thought about this, but I apologize for not adding a comment to address it earlier 😅 I believe there are two ways to "validate" an HTTP(S) event, one being the headers and the other being the payload. In the case of a CloudEvent, the type is specified in the payload and this could be extracted and asserted against a Conditional on the TriggerBinding.

I did not mention Conditionals because I understood this as agreed on (to be added after we can create resources), but I apologize if that made this proposal seem incomplete. Both header matches and Conditionals make sense as optional validation to me, but I could be swayed either way. Also, maybe we could expose the headers using some interpolation variable like ${headers} and do everything with Conditionals, but my intention wasn't to propose anything that specific.
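As a sketch of the two optional validation paths discussed here (header matches and payload introspection via something like a Conditional), with all field names hypothetical rather than taken from the actual TriggerBinding spec:

```python
def binding_matches(binding, headers, payload):
    """Hypothetical check: a binding may constrain request headers, a payload
    field (e.g. a CloudEvent 'type'), or both; unspecified checks pass."""
    for name, want in binding.get("headerMatch", {}).items():
        if headers.get(name) != want:
            return False
    want_type = binding.get("eventType")
    if want_type is not None and payload.get("type") != want_type:
        return False
    return True

# A raw GitHub webhook is identified by its header; a CloudEvent by its payload type.
github_push = {"headerMatch": {"X-GitHub-Event": "push"}}
ce_push = {"eventType": "dev.knative.source.github.push"}
```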

afrittoli commented on July 17, 2024

Even if we don't require consuming CloudEvents, I agree that if we emit an event, it probably should be a CloudEvent.

💯
One of the objectives that I hope the Triggers project will help us achieve is the ability to trigger pipelines/tasks asynchronously, i.e. where Tekton is the source of the events handled by the listener - e.g. https://github.com/tektoncd/pipeline/blob/master/pkg/reconciler/v1alpha1/taskrun/resources/cloudevent/cloudevent.go#L42.

One of the things that I like about the CloudEvents spec is that there are specifications available for multiple transport bindings and event formats: https://github.com/cloudevents/spec#cloudevents-documents. This allows decoupling the way a message is transported from its type / payload format. If Tekton defines templates that can extract data from a certain type/payload, it doesn't really matter how the message gets there; as long as there is a "last mile" HTTPS transport binding, the message could be getting to Tekton over Kafka or other highly scalable systems.

iancoffey commented on July 17, 2024

Ah, I see. This is a great, incredibly verbose and detailed issue with lots going on, but that's a good thing! I think that as long as the project can ingest CloudEvents and process them into PipelineRuns (and not be tied to the one spec, as the proposal states), then the proposal is technically satisfied. We never made a concrete decision to use all the helpers and servers and tools CloudEvents provides (I think?), and it sounds like this change can potentially get us there in a more "freeform" way, which def sounds cool.

Understanding potential changes to the user stories is the thing I most want to understand - if the experience of an operator is better, that's always a great thing!

Some more comments:

TL;DR: from what I can tell this proposal actually expands what we can support, from just CloudEvents, to CloudEvents and other arbitrary json HTTP POST based events.

This was a little confusing to me because it seems to already be in the proposal.

instead of including a (cloud event) type in TriggerBinding, we remove that and basically support both:

If this refers to the EventType field, that was kind of like the PipelineTrigger field of this project - it had a purpose, but nothing really ever directly needed it; it was mostly added on speculation, and also to make the intent clear that we aren't just dealing with CloudEvents. I don't think we need to specify that field regardless of how we implement the listener, come to think of it.

then what I really like about this design, like @vtereso said is that it lets us make progress without tying us to CloudEvents.

I may be misunderstanding, but like the above, this seems like part of the original plan.

If Tekton defines templates that can extract data from a certain type/payload, it doesn't really matter how the message gets there; as long as there is a "last mile" HTTPS transport binding, the message could be getting to Tekton over Kafka or other highly scalable systems.

/nod. I'm starting to see the CloudEvent PipelineResource is something I am going to have to pay more attention to 👍

Since we wanted to avoid being tied to CloudEvents all along (in fact, that was the first piece of input I received on the proposal idea), and because it seems like a win for users, this sounds like a pretty reasonable/useful change to me. But I think there's a bit more to discuss, and maybe some assumptions being made that I'd like to understand.

Wrapping can be potentially error prone

Our code to dig into random http payloads may also be error prone. Another:

Will take time to implement

Implementing our own custom event-parsing logic will also take time (and is saving time our goal?)

I mentioned above that understanding the changes to the user stories and the workflow that operators will end up needing to go through is the lens I am trying to view this through. If things actually work better and make more sense for the community at large over the long term, then that is the way to go imo.

vincent-pli commented on July 17, 2024

I think the positions are based on different angles of view.
@vtereso is thinking more about implementation and availability.

vendors (GitHub, DockerHub, etc.) do not emit CloudEvents

That's true for now, which means we need to put something between the vendor and the Trigger as a translator. I have not checked too many options besides Knative-contrib; see the list:

* Integrate with GitHub
* Integrate with Pub/Sub
* Integrate with Kafka
* Integrate with AWS SQS
* Integrate with Camel K
* Integrate with Websockets
* Expose an ingress

That means if we hit a vendor not included in the list above, we need to implement something ourselves or find a third-party one (if we are lucky).
Even worse, some of the items above depend on Knative-serving.
So we need to support raw events; could we make this support a plugin or provider and make it configurable?

As for @iancoffey, I believe he considers the whole picture: make the Trigger part of the event-processing system.
That's exactly what I think too; the Trigger should not stand at the end of the event-processing chain, it could also emit events and be part of the system.
From that point of view, CloudEvents could be the necessary gate pass.
In addition, CloudEvents is well designed; useful attributes like "id", "type", "source", and "time" are helpful for logging, tracing, and monitoring.

So I think we need to support both, with a well-designed plugin style.

n3wscott commented on July 17, 2024

Hello Tekton friends!

There is a lot in this bug to catch up on; before I do, a couple of things:

  • Knative Eventing is no longer coupled to Istio. knative/eventing#294
  • You are signing up for a lot of work to not use the components provided by knative-contrib for integrating with GitHub.
  • There is likely no way an exposed service on the internet will be lighter weight than running a Knative Service. It scales to zero.
  • The object model proposed is very complicated and it is not clear why you are reinventing triggers and externally exposed services.

One of the main goals of Knative Eventing is that creds get stored on the edge of the cluster so you don't have to manage and validate them on every pipeline invoker... We have experiments that do the same thing in reverse, where a thing is the gateway to an external resource and holds the creds for you to interact with it (Slack and AMQP are examples).

I would encourage your first pass to be http POST invokable with a CloudEvents payload (could be of your choosing) and get that working. Then you can wire that up to the infra that eventing already has in place. Do not reinvent eventing. Come help make Knative what you need or tell us what you need to be successful.

If more of Tekton was http invokable the world could get REALLY interesting...

dlorenc commented on July 17, 2024

Thanks for the reply @n3wscott! I have some questions inline:

This is great to see!

  • There is likely no way an exposed service on the internet will be lighter weight than running a Knative Service. It scales to zero.

Does scale to zero require Istio?

One of the main goals of Knative Eventing is that creds get stored on the edge of the cluster so you don't have to manage and validate them on every pipeline invoker...

I'm not quite sure I understand this, or how this differs from the proposals here. Can you share more?

Do not reinvent eventing.

This is definitely not our goal :) People have been hooking github webhooks up to systems for a long time though, so I don't think it's fair to characterize doing that again as "reinventing eventing".

We need a very low-resource usage, very low external-dependency, very low maintenance method for invoking things based on events from Github/Gitlab/Bitbucket/etc that works out-of-the-box.

We likely also need support for complex eventing systems, like knative/eventing. My guess is that we'll end up with a simple version in-tree to handle most use-cases, as well as support for a system like knative/eventing for users that need more.

vincent-pli commented on July 17, 2024

Thanks @n3wscott

There is likely no way an exposed service on the internet will be lighter weight than running a Knative Service. It scales to zero.

I saw that some components in Knative-contrib just create a Deployment and do not expose it to the outside world; is there any consideration behind that?

khrm commented on July 17, 2024

Thanks @n3wscott

* Knative Eventing is no longer coupled to Istio. [knative/eventing#294](https://github.com/knative/eventing/issues/294)

I think the latest Github source doesn't even require Knative-eventing but it does require knative-serving.

@dlorenc

Does scale to zero require Istio?

Yes. It uses knative-serving.

* You are signing up for a lot of work to not use the components provided by knative-contrib for integrating with GitHub.

Yes. That's true.

* There is likely no way an exposed service on the internet will be lighter weight than running a Knative Service. It scales to zero.

Agreed but as dlorenc said:

very low external-dependency, very low maintenance method for invoking things based on events from Github/Gitlab/Bitbucket/etc that works out-of-the-box.

Getting events from GitHub at scale isn't easy. Are we expecting to write a service to create webhooks and handle secrets and access tokens? The same goes for GitLab, Docker, etc. All of this has been done by eventing-contrib, as @n3wscott pointed out.

That means if we hit a vendor not included in the list above, we need to implement something ourselves or find a third-party one (if we are lucky)

Yes but we will be implementing this in any case if we don't use these knative-eventing sources.

dlorenc commented on July 17, 2024

Getting events from GitHub at scale isn't easy. Are we expecting to write a service to create webhooks and handle secrets and access tokens? The same goes for GitLab, Docker, etc. All of this has been done by eventing-contrib, as @n3wscott pointed out.

I think in my ideal world we would basically write them for a core set of systems (Github, Gitlab, Bitbucket), then rely on knative/eventing for the rest and more complex ones.

ncskier commented on July 17, 2024

Are we expecting to write a service to create webhooks and handle secrets and access tokens? The same goes for GitLab, Docker, etc. All of this has been done by eventing-contrib, as @n3wscott pointed out.

I might be misunderstanding this comment, but at the moment, the Triggers project isn't concerned with creating webhooks. Instead, we are only concerned with receiving "events" (which will probably be from webhooks in most cases). So, we are letting the user decide how to create the webhooks; this could be through the GitHub/GitLab/etc web UI, through Knative Eventing, etc.

n3wscott commented on July 17, 2024

@dlorenc Hello, nice to meet you!

Does scale to zero require Istio?

No, but it does require some kind of mesh, there are a couple options now.

I'm not quite sure I understand this, or how this differs from the proposals here. Can you share more?

The code that you write does not have to interact with GitHub at all. Also, GitHub has a maybe-once delivery system, and if your deployments are busy or lagging, then you will have to think about queues too; that is what eventing is for as well.

I would recommend thinking of a canonical form of a build request, with translations from pull_requests from GitHub and/or GitLab...

People have been hooking github webhooks up to systems for a long time though

But I bet they do not validate that the call came from GitHub for real. We do the GitHub validation dance before we let it into the cluster. Plus we manage setting up the webhook config in GitHub, making that far less to worry about for the dev.

We need a very low-resource usage, very low external-dependency, very low maintenance method for invoking things based on events from Github/Gitlab/Bitbucket/etc that works out-of-the-box.

You can build that system and it will do that one thing really well, or you could integrate with something like Knative, whose entire purpose is to bring events from outside the cluster and deliver them in.

I am being greedy here because I really want a slack build bot...

We likely also need support for complex eventing systems, like knative/eventing. My guess is that we'll end up with a simple version in-tree to handle most use-cases, as well as support for a system like knative/eventing for users that need more.

Then I recommend we set up a call where I can explain our ducktype concepts, and perhaps even our dep-injection method for reconcilers, which could allow you to cherry-pick a GitHub or GitLab reconciler into your own controller, keeping your footprint small (more work on our side is needed for this to be really doable, but it is doable work).

With the ducktypes you leave places to connect, we use them as runtime interfaces and control plane contracts.

n3wscott commented on July 17, 2024

@khrm

I think the latest Github source doesn't even require Knative-eventing but it does require knative-serving.

This is true, and an impl detail of that source. But if this is a hard no, perhaps come and add a mode to it that uses a Deployment instead of a Serving service?

Getting events from Github on the scale isn't easy. Are we expecting to write a service to create webhook and handle secret and accesstoken? Same will goes with Gitlab, docker, etc. All of this has been done by eventing-contrib as @n3wscott point out.

To be clear, eventing has more work to do to support all of what you want. But we plan to do it at some point and we could use help. OR, an alternate idea: if Tekton does this work, make those components use the in-development source spec (actively being worked on now; basically it means your component has spec.sink as an obj ref, and maybe also a URL to send CloudEvents to). Then Tekton sources can play with Knative in the current infra we are building.

Yes but we will be implementing this in any case if we don't use these knative-eventing sources.

I expect we will not have everything. But let's keep talking so that what you build is compatible with what we are building ... SYNERGY!!!

n3wscott commented on July 17, 2024

@ncskier

I might be misunderstanding this comment, but at the moment, the Triggers project isn't concerned with creating webhooks. Instead, we are only concerned with receiving "events" (which will probably be from webhooks in most cases). So, we are letting the user decide how to create the webhooks; this could be through the GitHub/GitLab/etc web UI, through Knative Eventing, etc.

If that is the case, really you can drop in the Knative Eventing Contrib GitHub source and point it at your function in the same way you might point GitHub's webhook at your function. The payload is the same, with the addition of some CloudEvent metadata in the headers. That's all... And this allows you to also not use any of that and point GitHub directly at the function.

Knative also has triggers and it is not a similar concept. We would call this an invoker or something.

You already have PipelineRun, what if it was PipelineStart as an addressable component... PipelineMain if you want to be cute.

afrittoli commented on July 17, 2024

Hello Tekton friends!

Hello! Thank you for your comments on this!

There is a lot in this bug to catch up on; before I do, a couple of things:

* Knative Eventing is no longer coupled to Istio. [knative/eventing#294](https://github.com/knative/eventing/issues/294)

* You are signing up for a lot of work to not use the components provided by knative-contrib for integrating with GitHub.

* There is likely no way an exposed service on the internet will be lighter weight than running a Knative Service. It scales to zero.

I find it very really nice for services that react to events to scale down to zero 0. Two caveats:

  • This does not come for free. The extra complexity and resource consumption of managing Knative Serving and a service mesh need to be taken into account
  • There should be some infrastructure in place to trace events and/or capture logs, because otherwise logs are lost when the serving scales back down to zero


khrm commented on July 17, 2024

Are we expecting to write a service to create webhooks and handle secrets and access tokens? The same goes for GitLab, Docker, etc. All of this has been done by eventing-contrib, as @n3wscott pointed out.

I might be misunderstanding this comment, but at the moment, the Triggers project isn't concerned with creating webhooks. Instead, we are only concerned with receiving "events" (which will probably be from webhooks in most cases). So, we are letting the user decide how to create the webhooks; this could be through the GitHub/GitLab/etc web UI, through Knative Eventing, etc.

The discussion digressed to the knative-contrib dependency and CloudEvents, so I wrote that reply.
I think even if we go with the current proposal, it doesn't preclude us from using knative-contrib sources. As @bobcatfish pointed out, `event.head_commit.id` and `event.repository.url` are not values that the CloudEvent spec defines. So either way we are going to do the work to extract them from webhook events, be they CloudEvents or raw payloads. Even if knative-contrib sends CloudEvents, we can use the CloudEvent type in conditionals/header matches to dispatch the appropriate actions.

Both header matches and Conditionals make sense as optional validation to me, but I could be swayed either way. Also, maybe we could expose the headers using some interpolation variable like ${headers} and do everything with Conditionals, but my intention wasn't to propose anything that specific.

@vtereso I think having both is better, because the way we distinguish type could come from either the payload or the headers. Even for CloudEvents we have both: attributes in the headers for HTTP binary content mode, or in the body for structured content mode. https://github.com/cloudevents/spec/blob/v0.3/http-transport-binding.md#31-binary-content-mode
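The two content modes from the linked spec can be sketched like this. The `ce-type` header and `application/cloudevents+json` content type come from the v0.3 HTTP binding; the function name and request shape (a plain dict plus a body string) are illustrative only:

```python
import json

def event_type(headers, body):
    """Extract the CloudEvent type from an HTTP request, if it is one.

    Binary content mode carries event attributes in ce-* headers;
    structured content mode carries the whole event as JSON in the body.
    """
    lowered = {k.lower(): v for k, v in headers.items()}
    if "ce-type" in lowered:                       # binary content mode
        return lowered["ce-type"]
    if lowered.get("content-type", "").startswith("application/cloudevents+json"):
        return json.loads(body).get("type")        # structured content mode
    return None                                    # not a CloudEvent

# Binary mode: metadata in headers, the original payload stays in the body.
print(event_type({"Ce-Type": "dev.knative.source.github.push",
                  "Content-Type": "application/json"}, "{}"))

# Structured mode: everything, including the type, is in the JSON body.
print(event_type({"Content-Type": "application/cloudevents+json"},
                 json.dumps({"type": "dev.knative.source.github.push"})))
```

Either way the type is available for a header match or a Conditional to dispatch on.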


vincent-pli commented on July 17, 2024

This seems to be getting a little out of control 😆
Our point of controversy is whether we should support CloudEvents as first class.

Seems the CloudEvents club has many members:

  • Knative Eventing: helps with event persistence, decoupling of publishers and consumers, event routing and filtering...
  • knative eventing-contrib: helps with event access/translation from different vendors.
  • Knative Serving: helps expose services to the outside world

If we decide on CloudEvents, we get these benefits immediately with no extra effort.
The bad thing is that we are also constrained by them.
What should we do if our requirements conflict with the current implementations in Knative?
A quick solution is to implement it ourselves, like this: https://github.com/vincent-pli/gitlabsource (to avoid depending on Knative Serving 😭)

So, I still insist we should support both, with a well-designed plugin style.

@n3wscott

You already have PipelineRun, what if it was PipelineStart as an addressable component... PipelineMain if you want to be cute.

I think Trigger in tekton is the PipelineMain or PipelineStart


bobcatfish commented on July 17, 2024

Quick update: this topic is on the agenda for this week's working group meeting, so I highly recommend that folks with strong opinions attend, or add your thoughts to the agenda if you can't.

