
akka-meta's Introduction

akka-meta

This repository is dedicated to high-level feature discussions and for persisting design decisions.

The intended process is to open an issue about a topic that needs to be discussed and after a conclusion has been reached a PR should be opened that adds a markdown file containing the essence of what has been concluded and why. This PR will then be compared with the issue’s conclusion and merged if it properly reflects the result of the discussion.

akka-meta's People

Contributors

patriknw, rkuhn


akka-meta's Issues

Akka Sprint Plan 2017-04-18

Sprint plan for the core Akka team

3 weeks
Start: 2017-04-18
End: 2017-05-05

Akka Typed

  • Implement things, focus on the core (both Java and Scala APIs)
  • Verify that interop with classic is working with some typical things like Cluster Sharding
  • Release some artifact (merge back to master, or a milestone)

Documentation

  • Use the new design and add content for akka.io site
  • Complete the Getting Started Guide by adding Java examples, and complete missing things
  • Quick start (Hello World, g8, maven archetype)
  • Paradox Theme for the new design

Play / Akka-Http integration

  • Final verification of Play benchmarks with new materializer and other improvements
  • HTTP/2
    • Flag/API to enable it
    • Document it
    • Test and fix small things

Bugs and failures

Bug start count: 8
Failure start count: 41

Akka HTTP Bug start count: 37
Akka HTTP Failure start count: 12

Akka Sprint plan 2016-09-12

Sprint plan for the core Akka team

3 weeks
Start: 2016-09-12
End: 2016-09-30

Artery

  • Focus on final testing and hardening
  • The goal is to merge Artery to master and release it in a 2.4.x
  • Tickets for remaining work

HTTP/2 PoC

  • Time boxed PoC to estimate effort required to support HTTP/2
  • Might become a starting point for community to continue development

Akka Http

  • Move code, docs and issues to https://github.com/akka/akka-http
  • For the docs we need to create a conversion tool from rst to Paradox
    • Hopefully the community can help out with the manual adjustments of the documentation.
  • Meeting with the core community

Bugs and failures

Bug start count: 8
Failure start count: 29

Akka HTTP Bug start count: 14
Akka HTTP Failure start count: 4

We should also work on bugs in akka-stream-kafka and akka-persistence-cassandra.

Work on bugs and test failures is timeboxed to 4 days (Monday and Tuesday of the first and second week). Customer-reported bugs may of course change this.

Blog

  • Complete and publish blog post about streaming XML parser
  • Threading and Concurrency part 2

Releases

  • 2.4.x including Artery
  • Milestone release of akka-http from new repository

How shall we surface acknowledged Sinks in Akka Streams?

This is the much simpler cousin of #13. The primary proposal is to model this as a Flow:

class KafkaSink extends GraphStage[FlowShape[Message, Confirmation]]

This is fundamentally what happens: messages are sent to Kafka, to be turned into confirmations.

How shall we surface acknowledged Sources in Akka Streams?

These are some initial thoughts, exemplified on a hypothetical streaming Kafka connector API. Use-cases that need to be supported:

  • auto-commit (i.e. at-most-once delivery to the stream)
  • manual commit (i.e. at-least-once delivery to some stream section after which the commit is generated)
  • manual confirmed commit (i.e. at-least-once delivery to a section with at-most-once follow-up effects after the commit)

Option 1: Streams all the way down

Here we only use in-stream data elements, exclusively.

class KafkaSource extends
  GraphStage[FanOutShape2[Confirmation, Message, ConfirmationAck]]

The primary output is the Message, which has an associated Confirmation member that will eventually be fed back. After confirmation has successfully moved the cursor, a ConfirmationAck is produced.

Pros

  • streams only—once it is wired, it will work

Cons

  • complex wiring required, does not work without GraphDSL
  • slightly convoluted and rather unintuitive type signature for KafkaSource

Option 2: Using materialized values

Here we provide multiple independent pieces in order to allow usage in normal Flow syntax.

class KafkaSource extends
  GraphStageWithMaterializedValue[SourceShape[Message], KafkaReader]
class KafkaConfirmation extends
  GraphStageWithMaterializedValue[FlowShape[Message, Confirmation], KafkaConfirmer]
trait KafkaReader {
  def wireUp(c: KafkaConfirmer): Unit
}

Here the idea is that users can use the linear Source & Flow DSL as in:

KafkaSource().map(...).viaMat(KafkaConfirmation())(_ wireUp _)

This only works well if the whole setup is created in a place that has this kind of overview. The KafkaReader will forward the confirmations to the KafkaSource using the GraphStage’s asyncInput facility; all Kafka interaction is owned by the KafkaSource.

Pros

  • allows usage of the linear DSL while retaining the reusable blueprint character (the current reactive-kafka solution goes in this direction but requires a pre-materialized Source and Flow that are exposed using RS interfaces).

Cons

  • forgetting to set up the wiring will be hard to detect in a timely/direct fashion
  • putting yet more weight on a potentially overused feature

Option 3: Using side-effects

While the correct message and confirmation types need to travel through streams to the right places in all solutions, this one encapsulates the concern of confirmation within the data elements themselves.

class KafkaSource extends GraphStage[SourceShape[Message]]
trait Message {
  def confirm(): Future[Done]
}

This allows usage as a bog-standard Source.

Pros

  • easy to set up

Cons

  • data elements retain a reference to the Source and are not (de)serializable

Backpressure of confirmations is a potential problem which probably can be handled by conflating them (since a confirmation moves a cursor and therefore confirmations are cumulative).
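
Since confirmations are cumulative, conflating them can be sketched in plain Java (class and method names here are invented for illustration, not part of any Akka or Kafka API):

```java
import java.util.Optional;

/**
 * Illustrative sketch: because a confirmation moves a cursor, confirmations
 * are cumulative, so a back-pressured stream of confirmations can be
 * conflated by keeping only the highest offset seen so far.
 */
final class ConfirmationConflater {
    private long highestOffset = -1;

    /** Record an incoming confirmation; only the maximum offset is kept. */
    void offer(long offset) {
        if (offset > highestOffset) {
            highestOffset = offset;
        }
    }

    /** Drain the single conflated confirmation, if any. */
    Optional<Long> poll() {
        if (highestOffset < 0) return Optional.empty();
        long result = highestOffset;
        highestOffset = -1;
        return Optional.of(result);
    }
}
```

However many confirmations arrive while the commit path is slow, at most one (the latest cursor position) is pending at any time.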

Other API Options / Considerations

Factory Methods

Any of the above can be combined with factory methods that turn a Flow[Message, Confirmation] into a Source[ConfirmationAck], but this has several downsides:

  • it limits what can be expressed since only ConfirmationAck emerges from the overall stream
  • it violates the blueprint principle in the sense that Kafka pieces are not freely composable if offered in this fashion
  • it goes against our recommendation that libraries shall provide stream topologies, not consume them—this violation is only acceptable for framework-like usage as in Akka HTTP server

Adapters

It would be possible to turn a stream of confirmable messages (Option 3) into a “Flow” that forwards immutable messages and expects a stream of confirmations back. This adapter would make it possible to get around the serialization limitations.
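
Such an adapter might, hypothetically, be sketched like this in plain Java (all names invented; a real implementation would live inside a stream stage):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative sketch (not an Akka API): strips the confirm() capability off
 * a confirmable message, so an immutable, serializable payload plus an id can
 * be forwarded downstream, and confirmations are routed back by id.
 */
final class ConfirmationAdapter {
    private final Map<Long, Runnable> pending = new ConcurrentHashMap<>();

    /** Register a confirmable message's confirm action under an id. */
    void register(long id, Runnable confirm) {
        pending.put(id, confirm);
    }

    /** Route a confirmation for the given id back to the original message. */
    boolean confirm(long id) {
        Runnable c = pending.remove(id);
        if (c == null) return false;
        c.run();
        return true;
    }
}
```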

Fan-Out of confirmable messages

Broadcasting confirmable messages to multiple destinations is troublesome: which of them shall be routed back to the source in order to move the cursor? There is no general solution to this problem; we may simply choose to do nothing about it. If we do something, the drawback will be that the messages need to become mutable and thread-safe, because the required confirmation count should match the fan-out factor.

Virtual actors/endpoints

I want to continue the discussion about virtual endpoints (a name invented by Roger Alsing, since "virtual actors" may be misleading). The whole discussion started at akkadotnet/akka.net#756 and I think we all (both JVM and .NET Akka developers) should talk about this.

In short: virtual actors (as they are called by the Orleans and Orbit frameworks) are not bound to any concrete instance or resource. From a logical perspective, they are eternal. A virtual actor doesn't have an explicitly managed lifecycle; in case of failure, i.e. a crash of the server executing it, it's simply respawned on another node. Since it always exists, it's always addressable and responsive.
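
A minimal sketch of the addressing idea in plain Java (invented names, ignoring distribution and messaging entirely): the caller only ever holds an id, and an instance is (re)activated on demand.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

/**
 * Illustrative sketch of "virtual endpoints" (not an Akka or Orleans API):
 * endpoints are addressed purely by id; an instance is (re)created on first
 * use, so from the caller's perspective the endpoint always exists.
 */
final class VirtualEndpointRegistry<E> {
    private final ConcurrentMap<String, E> live = new ConcurrentHashMap<>();
    private final Function<String, E> activator;

    VirtualEndpointRegistry(Function<String, E> activator) {
        this.activator = activator;
    }

    /** Look up the endpoint, activating a fresh instance if none is alive. */
    E endpoint(String id) {
        return live.computeIfAbsent(id, activator);
    }

    /** Simulate a crash: the instance is dropped, but the address stays valid. */
    void crash(String id) {
        live.remove(id);
    }
}
```

In a real system the activation would of course pick a (possibly different) node and restore state, which is where all the actual difficulty lives.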

cc @rkuhn @rogeralsing

Akka HTTP, free 4 all call

Dear hakkers,
we'll try something new for Akka HTTP: since it aims to be very community driven, we figured we should make some time to have a hangout with anyone who wants to chat about it.

Part of our core team here at Lightbend (myself and @jrudolph, who are most involved in Akka HTTP) and hopefully many of the @akka/akka-http-team will be present.

The goal of this call is to briefly chat about next steps, what to hack on, and medium-term goals / roadmap: things like the HTTP/2 work, and how we'd like to move forward with integrating more with Play (as a backend), etc.

Time: 26th October 2016, Wednesday; 17:00 CEST
The hangout is free for anyone to join, see you there!

Agenda:

  • go over immediate tickets that we should solve for 10.0.0
  • talk about HTTP/2 plans
  • doc improvements/roadmap
  • reiterate bin-compat/versioning decisions
  • remaining cleanup work after transition from akka/akka (?) -....
  • talk about topics great to pick up by community because we won't have time to
  • how to go about adding new people to https://github.com/orgs/akka/teams/akka-http-team
  • ???

/* This is an experiment, we have not been doing such things previously, and let's hope it'll prove to be useful. */

Akka Sprint Plan 2017-03-20

Sprint plan for the core Akka team

3 weeks
Start: 2017-03-20
End: 2017-04-07

Akka Typed

  • Review APIs and get an understanding of the implementation
  • Implement things, focus on the core (both Java and Scala APIs) and interop with classic

Documentation

  • Use the new design and add content for akka.io site
  • Paradox Theme for the new design, collaborate with Tools Team
  • Convert current rst to Paradox if the theme is good enough
  • Review and improve the Introduction documentation (the book style), including the tutorial example
  • Getting Started (quick start)

Play / Akka-Http integration

  • Benchmark the Play Framework use case with new materializer
  • Remove performance obstacles in play - akka-http integration in collaboration with play team
  • (HTTP/2 work is on hold because it's not realistic that we will have time for it this sprint)

Bugs and failures

Bug start count: 10
Failure start count: 46

Akka HTTP Bug start count: 35
Akka HTTP Failure start count: 12

Blog

  • Not realistic that we will write anything

Other

  • team f2f meeting
  • catch up with Alpakka PRs

Akka HTTP - stable, growing and tons of opportunity

Dear hakkers,

After the recent Akka HTTP performance improvement releases, it is time to prepare for the Stable release of Akka HTTP.

We have given this process a lot of thought over the last months, including reaching out personally to many of its significant contributors, as well as considering the long term growth plans of the Akka ecosystem.

In tandem with the upcoming announcement of Akka HTTP becoming a fully stable maintained module, we want to include the following changes.

You, the extended Akka HTTP team

Akka has historically been very strictly owned and driven by the core team here at Lightbend. With this move, we would like to run an experiment and see if our awesome community is able and interested in taking more ownership of, and shaping the direction of, such an important project as Akka HTTP.

Lightbend and Akka team of course remain heavily invested and interested in Akka HTTP.

We believe it is an important building block both for Akka and for reactive applications that favor the toolkit (i.e. not-framework) design style that Akka maintains.

Akka HTTP remains supported by Lightbend and we will actively participate in its development. In the short term we have promised a few weeks of multiple people working on HTTP/2 as well as final performance improvements. In the long term, we will keep maintaining, reviewing, and working on it; however, unlike in the last 2 years, we will not focus exclusively on Akka Streams and HTTP – we have new exciting things we want to bring to you which you’ll learn about very soon (one of them being the remoting rewrite, Artery).

Action: This is simply to state that there is a lot of work to be done, and plenty of awesome developers out there whom we know are able to pull it off, so we’d like to recognise those efforts by awarding a place for the Akka HTTP team in the GitHub organisation.

Highly Open Participation

As a general rule, we’ll focus, even more strongly than ever before, on a highly open and transparent way to participate in and shape the future of Akka HTTP.

We would like to nominate a number of people–whom we highly value for their past and current participation in both the Spray and Akka HTTP projects over the last year–to form the seed of the Akka HTTP team on GitHub. This team is specifically not only Lightbend employees, and its aim is to allow various people with a strong commitment to Akka HTTP to help shape its future and be recognised for it in more than just release notes.

The people whom we’d like to invite to form the initial Akka HTTP team are:

  • @2Beaucoup – as recognition of his continued involvement in the project since its Spray days,
  • @jypma – for his continued efforts on improving and re-inventing the Directives JavaDSL,
  • @jrudolph – of Spray and Akka HTTP fame, who will be joining us at Lightbend’s Akka team,
  • @sirthias and the existing Akka team.

The team will have merge rights in the project, so it will be able to help the community get more pull requests reviewed and merged more quickly.

We initially don’t see the need to split the Gitter channel or mailing list; those shall remain gitter.im/akka/dev for developing Akka, and gitter.im/akka/akka and akka-user for questions about using Akka.

Question: We are also thinking about scheduling a bi-weekly hangout, open for anyone to attend, in which we’d talk about the direction and the important things to work on next. Let us know if this sounds useful and we’ll figure something out.

It would possibly be a good opportunity to improve collaboration in the Akka HTTP team by growing our bonds through more than just typed text and tickets.

A new Versioning scheme for Akka HTTP

Binary compatibility is very important to us. We’ve shown one of the best track records in the Scala ecosystem for maintaining such compatibility–including the efforts we spent between 2.3.x and 2.4.x that made these releases binary compatible, even though they would have been major releases under the previous versioning scheme.

Please note that Akka, since the release of 2.4.0, maintains a MAJOR.MINOR.PATCH versioning scheme, as opposed to the previously used EPOCH.MAJOR.MINOR, which was inspired by the Scala versioning scheme.

We cannot use the 1.0 version to demarcate this new start, as it would be very confusing: that number was already used back in July 2015 while we were working on Akka Streams & HTTP under a version scheme separate from Akka "core", which was at 2.3.x and later continued on to 2.4.x.

Our goal with the HTTP versioning scheme is to signify that while Akka HTTP may need to bump its major version number, e.g. from 10 to 11, it will still be compatible with Akka 2.x–the stable core, which will only break compatibility when a huge change is needed.

Action: After careful consideration of all of the above constraints, we propose that the new versioning scheme be different enough not to be confused with Akka "core" versions. Specifically, we think using a version number much higher than 2 or 3 (which would be confusable with Akka versions) is the way to go, and suggest starting at 10.0, simultaneously announcing Akka HTTP to be stable (removing the experimental flag).

To decide: Given the troublesome situation of the Directives trait being extended by user code, binary compatibility is impossible to maintain when a new directive is added to it. This will change with Scala 2.12; however, with 2.11 it is still the case. We propose to keep source compatibility in minor releases (e.g. a new directive can be added, but to a new trait that one mixes in explicitly), and in a major release we can merge those new traits into the existing Directives structure. A major release may then still happen sooner than an Akka 3.0 (which is very far out), and the project can keep a healthy speed, to be determined by the team.
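
The proposed scheme can be illustrated with Java interfaces standing in for the Scala traits (all names here are invented; the real Directives hierarchy differs): new directives land in a separate type that users opt into explicitly, so the existing binary interface of Directives is untouched in a minor release.

```java
/**
 * Illustrative sketch only. "Directives" stands in for the published trait
 * that user code extends; adding a method to it directly would break binary
 * compatibility for those users.
 */
interface Directives {
    default String path(String segment) { return "path(" + segment + ")"; }
}

/**
 * Hypothetical new directives added in a minor release: a separate interface
 * that users mix in explicitly; merged into Directives only at a major release.
 */
interface NewDirectives extends Directives {
    default String withSizeLimit(long bytes) { return "withSizeLimit(" + bytes + ")"; }
}

/** User code opts in to the new directives without Directives itself changing. */
final class MyRoutes implements NewDirectives { }
```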

Split documentation

Akka always was, and still is, well known for its excellent documentation, and we intend for it not only to stay this way, but to become even better!

Our largest current challenge came with the growth of documentation pages, especially with the addition of Akka HTTP and its more than 300 directives (all directives and pages available in both Scala and Java); it is becoming more difficult to read and browse the entire docs together.

Therefore, we would like to split up the documentation into different module sub-directories. Specifically, this means that we’ll move all of the Akka HTTP docs to a root-level "http" directory on doc.akka.io.

Additionally, we want to move the documentation to Lightbend Paradox, a documentation engine. It is much simpler to use than Sphinx and reStructuredText, which Akka currently uses, because it is plain Markdown. It will shortly gain fine-tuned Scala documentation features and can even include executable Akka.js snippets using ScalaFiddle. You can get a glimpse of how documentation with embedded runnable Akka Streams snippets might look on this example page prepared by Andrea Peruffo (from the Akka.js team).

As you know, we included Algolia-powered search in our docs, so it’s now much easier to find the right thing. Often, however, you are searching for things in context–i.e. "I’m looking to do X in Akka HTTP"–or specifically looking for how to do something with Reactive Kafka. Confining such search queries to a given module will yield a better search experience.

Action: Docs will remain hosted under doc.akka.io/ as they are currently, but each module will be moved to its own top-level directory, e.g. doc.akka.io/http/10.0/. We will attempt (and need help doing it, as there’s a ton of documentation) to move from Sphinx to Paradox, which should make contributing docs much simpler as it is plain Markdown.

Exciting and upcoming: HTTP/2

As briefly mentioned in release notes over the last months: yes, we will work on an HTTP/2 proof-of-concept for Akka HTTP. It is important to realise that unlike, for example, Play–which relies on Netty for implementing the HTTP layer and can simply upgrade Netty to get this capability–Akka HTTP is an entire web server, thus the responsibility of implementing all the low-level bits of the HTTP/2 protocol falls into the hands of akka-http-core.

Action: We will spend a number of weeks developing an HTTP/2 PoC; however, we are not sure how many features we’ll be able to implement during that time. When we start the HTTP/2 sprint, we’ll invite you, the parties interested in this feature, to review, help, and contribute to get this the best possible support. In other words, with the project becoming more and more community led, we would like to encourage a "vote with code" approach: if a feature is important, it should be able to attract contributions to see it shipped.

Summing up

We believe this move is both important and exciting. It will allow the community to grow not only in sheer size but also in strength, and to go where no one has gone before. Feel free to comment in this meta thread if you have any thoughts on the topic.

More technical details which we need to decide before calling the module stable will be debated on the github.com/akka/akka-http repository, for example stating an explicit source compatibility requirement, or perhaps deprecating the "trait-style" way of using directives to allow us to keep binary compatibility in many more situations.

Again, we’d like to stress that we want to hear and see both feedback and code from you all.

Let’s get hakking!

Clarify commit msg format

https://github.com/akka/akka/blob/master/CONTRIBUTING.md#creating-commits-and-writing-commit-messages currently just states:

2. First line should be a descriptive sentence what the commit is doing, including
   the ticket number.

When skimming through the commit messages in the akka and akka-http repos, it stands out that they don't adhere to a consistent format.

I suggest we stick to one of the following

  • <[+-!=]><module-id> #<ticket> <msg>
  • <msg>, #<ticket>
  • <msg> (#<ticket>)

and update the contributing guidelines accordingly. (I guess for akka-http it makes sense to just link to the akka one.)
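
For illustration, a hypothetical check for the third proposed format, `<msg> (#<ticket>)`, could look like this (not part of any existing tooling):

```java
import java.util.regex.Pattern;

/**
 * Illustrative, hypothetical validator for the proposed commit message
 * format "<msg> (#<ticket>)" — a descriptive message followed by the
 * ticket number in parentheses.
 */
final class CommitMessageFormat {
    private static final Pattern MSG_THEN_TICKET =
        Pattern.compile(".+ \\(#\\d+\\)$");

    /** Check whether the first line of a commit message matches the format. */
    static boolean matches(String firstLine) {
        return MSG_THEN_TICKET.matcher(firstLine).matches();
    }
}
```

The other two proposed formats would need analogous patterns; the point is only that any of them is trivially machine-checkable, e.g. in a commit hook.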

Formal descriptors for HOCON configuration

The idea/proposal is to create a common descriptor format which could be used to define the shape of a HOCON configuration. This way we could support things like IntelliSense and compile-time verification of any typos or invalid types in the key/value pairs defined by end users.
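
A descriptor for a HOCON section might, hypothetically, look something like this (the syntax below is invented purely to illustrate the idea, and the keys are examples, not a proposed standard):

```hocon
# Hypothetical descriptor syntax, invented for illustration only:
# each leaf declares the expected type (and constraints) of the setting.
akka.remote.artery {
  enabled            { type = boolean, default = false }
  canonical.hostname { type = string }
  canonical.port     { type = int, range = [0, 65535] }
}
```

A tool reading such a descriptor could flag a misspelled key or a string supplied where an int is expected before the application ever starts.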

Akka Sprint Plan 2017-05-08

Sprint plan for the core Akka team

3 weeks
Start: 2017-05-08
End: 2017-05-26

Documentation

  • Release the new design and content for akka.io site
  • Release Getting Started Guide
  • Release Quick start (Hello World, g8, maven archetype)
  • Convert reference docs to Paradox

Investigate multi data center solutions

  • define use cases, patterns
  • describe and prototype solutions

Other

  • follow up on Play http/http2 issues
  • write and publish more Akka Typed blog posts
  • prepare scalability tests

Bugs and failures

Bug start count: 7
Failure start count: 39

Akka HTTP Bug start count: 36
Akka HTTP Failure start count: 15

Akka Sprint plan 2017-01-02

Sprint plan for the core Akka team

4 weeks
Start: 2017-01-02
End: 2017-01-27

HTTP/2

See t:http2

Akka 2.5

  • Distributed Data
    • Promote to non-experimental
    • Ensure API can support delta-CRDT
  • Revisit Java8 API for actors
    • AbstractActor is still experimental
  • Switch on remote mutual TLS authentication setting by default

New Getting Started Guide

  • Write the remaining chapters

Bugs and failures

Bug start count: 9
Failure start count: 42

Akka HTTP Bug start count: 42
Akka HTTP Failure start count: 9

Other

  • Work with designer for akka.io website work
  • Complete the almost done things:
    • Traversal oriented layout
    • Artery compression ownership
    • Coordinated shutdown
    • Publish samples

Releases

  • akka 2.5-M1
  • alpakka, akka-http "on demand" if we accumulate enough to release

Zip | Unzip functionality with N items.

Zip and Unzip taking just 2 items is a bit limiting. Perhaps you could support N items, as ZipWith does. Even more than for Zip (since there is ZipWith), this is needed for Unzip.
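
For reference, the element-wise semantics of a 3-input ZipWith can be sketched in plain Java (this is not the Akka Streams API, just the per-element combination such a stage performs):

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch of a 3-input ZipWith: emit one combined element for
 * each triple of elements taken from the three inputs, stopping when the
 * shortest input is exhausted.
 */
final class Zip3 {
    interface F3<A, B, C, R> { R apply(A a, B b, C c); }

    static <A, B, C, R> List<R> zipWith(
            List<A> as, List<B> bs, List<C> cs, F3<A, B, C, R> f) {
        int n = Math.min(as.size(), Math.min(bs.size(), cs.size()));
        List<R> out = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            out.add(f.apply(as.get(i), bs.get(i), cs.get(i)));
        }
        return out;
    }
}
```

An N-input Unzip would be the mirror image: split each incoming element into N parts, one per output.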

Akka Sprint plan 2016-10-31

Sprint plan for the core Akka team

3 weeks
Start: 2016-10-31
End: 2016-11-18

Lagom

  • Complete Scala API for persistence and serialization
    • Mostly small FIXME and translation of documentation remaining

HTTP

  • HTTP/2

    • Basics in place so that it would be possible for community to continue some parts
  • akka-http 10.0.0

    • Make docs navigation somewhat useful

Alpakka

  • Release 0.1
  • Announce and contact interesting existing projects
  • Define list of essential integrations
  • Evaluate Reactive Sockets as an integration target

Documentation & web site

  • Start writing the new getting started documentation
  • Scaladex community listing
    • We can just link to scaladex for the listing

Bugs and failures

Bug start count: 7
Failure start count: 32

Akka HTTP Bug start count: 30
Akka HTTP Failure start count: 12

Work on bugs and test failures is timeboxed to 4 days (Monday and Tuesday of the first and second week). Customer-reported bugs may of course change this.

Blog

  • Overview of Artery features and internals
  • Threading and Concurrency part 2

Other

  • Try Cinnamon 2.1.0
    • One dedicated day, full team, 14/11
  • Work with designer for akka.io website work

Releases

  • akka-http 10.0.0
  • alpakka 0.1

coexistence of typed and untyped actors

The purpose of this issue is to illuminate to what degree we shall support typed and untyped actors coexisting and collaborating—within the same ActorSystem as well as across systems (in-JVM; remoting would probably be a different topic).

The base assumption is that untyped actors will not go away anytime soon.

Another assumption I’m making is that the internals of remote & cluster will be reformulated and included in Akka Typed eventually. APIs for Persistence are not yet considered, but for streams it should be enough to add some new sinks and sources plus a new Materializer.

So, what we should discuss here is which operations shall be supported between typed and untyped actors on the different ActorSystem implementations.

Typo in Orleans and Akka Actors: A Comparison

The Summary and Interpretation says:
"Orleans offers a programming model ... This is achieved by making a set of implementation choices—like at-least-once delivery which is based upon ...."
This is based on the old understanding of Orleans messaging guarantees. As was clarified later on, Orleans provides at-most-once, not at-least-once, messaging guarantees.

Akka Sprint plan 2016-06-20

Sprint plan for the core Akka team

3 weeks
Start: 2016-06-20
End: 2016-07-08

Artery

The focus should be on hardening.

  • complete and merge compression table PR
  • add counters to flight recorder, use in compression table
  • use flight recorder to write deep tests (e.g. verify order and sequence of messages)
  • make tests pass, enable cluster tests
  • flush on shutdown, might be needed for tests to pass
  • benchmark and improve performance

Kafka Connector

  • Review Alexey's PR
  • Review PR that is adding more integration tests
  • More contributions are welcome
    • better test coverage, some work is in progress by contributor
    • performance tests
    • documentation
  • We will have one Sprint in August to push the 0.11 version over the finish line.

Bugs and failures

Bug start count: 29
Failure start count: 33

Work on bugs and test failures is timeboxed to 4 days (Monday and Tuesday of the first and second week). Customer-reported bugs may of course change this.

Community

Several things to grow community involvement. Stay tuned.

Releases

  • 2.4.x on demand
  • Artery milestone at the end of the sprint

Akka Typed: How to break all the things?

Once the scope is decided in #18 we need to start the implementation, and there I can see two possible paths:

  1. start out from what we have in akka-actor, copy it over, and start modifying it (beginning with the removal of the Envelope :-) )
  2. start from scratch, write a completely new ActorSystemImpl, in part inspired by what we have in akka-actor

The second one sounds attractive because it is easier to avoid adding something that is obsolete, but I fear that this approach is also too much waterfall, just like Akka Streams aimed to do everything at once (yes, I appreciate that the situation was not exactly the same).

I am currently leaning towards the first approach because it will allow us to make incremental improvements while keeping the tests green, and the system remains shippable at all times—a brave move would be to actually do this on the master branch, but that means being absolutely clear that features will be going away over time … this is unusual, if not unorthodox. But Akka has always done things a little differently.

In this case too, @akka/akka-team @michaelpnash @hseeberger and all others, please comment so that we can decide on a way forward.

Akka Typed: How much Breakage is Desired?

akka-actor has accrued many features over the past seven years, and in many ways akka-typed offers the opportunity to start from the ground up: different package names and APIs make a direct automatic translation undesirable anyway, so adaptations and thinking will be required of the users. We should therefore aim to create exactly what we want future users to see, unencumbered by what past and current users have seen so far—akka-actor will not go away anytime soon, so not much harm is done.

With that said, here is a list of features that I think deserve critical justification (and absent that, removal):

  • reflective creation of mailboxes and dispatchers: there should be just 2 implementations each (bounded/unbounded and CPU/IO)
  • Routers: mixing concurrent code in the send path with an Actor identity is so troublesome that I don’t think it pulls its weight—we should only offer actor-based routing and non-actor-backed ActorRefs that handle their own thread-safety
  • deployment scope: in particular its only use—remote deployment—should go away and be replaced by normal messages that ask a remote service to create an actor
  • generic scheduling of Runnables (i.e. retain only deferred message sending, offered on the ActorContext)
  • Extensions: they should be Actors that are created under /system and accessible via ActorRefs—with their creation being async
  • Failure signals: including the Throwable means asking the supervisor to know more about its children than is healthy—a failure is by definition unforeseen and should be acted upon without knowledge of the cause (this is inspired by Ponylang); this change would allow us to offer comprehensive built-in tools for failure handling because only the timely aspects are relevant (i.e. if and when to restart)

Please @akka/akka-team @michaelpnash and everyone else: let us settle on a plan and then execute it.

This ticket is about the scope only, the approach shall be discussed in the next ticket.

Akka HTTP call for all

Hi hakkers,
we'd like to have another free-for-all Akka HTTP call.

Anyone who's interested in developing Akka HTTP is welcome to join.

Time:

Johannes knows everything there is to know about the plans, the codebase, etc., so I don't think there's a need to delay it just because I'm away – we have wanted to hold this call since January, but travel made it too tricky.

Topics:

  • Lightbend team update, what we'll focus on, and roadmap etc
    • We'll talk about HTTP/2 plans and timing; we'll soon work more closely together with the Play team to get Akka HTTP in there as the engine – Konrad or Johannes
  • getting started / docs improvements, how to go about it - Josep
  • declarative parsing / streaming - Jan
  • simple high-level client infrastructure - gosubpl
  • [your topic here, please propose below]

Let us know what time and topics would work for you!
The call will be on google hangouts and free to join for anyone who wants to.

Semantic Logging in Akka Core

The current logging infrastructure in Akka is based on logging strings, which works fine for old-school loggers like NLog (on .NET, that is).

But there is a lot of valuable information in those logs that is hard to get from a pure textual log.
For example, let's say you want to measure the frequency of heartbeats, or how often some system is gated, etc.
You would have to parse text output for that.

(I know you are also working on monitoring support which might make some of these things easier)

But if the logging infrastructure inside the Akka core modules used semantic logging, we could extract much more useful information about the system.

e.g. Akka logs -> elastic search -> kibana dashboard.
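
A minimal sketch of what such a typed event might look like (the event types and names here are made up for illustration, not an existing Akka API):

```scala
// Hypothetical sketch: instead of a pre-formatted string, core modules would
// emit typed events that downstream sinks (e.g. Elasticsearch) can index
// field by field.
sealed trait SemanticLogEvent
final case class HeartbeatReceived(from: String, rttNanos: Long) extends SemanticLogEvent
final case class NodeGated(address: String, reason: String) extends SemanticLogEvent

// A sink can then render structured output without parsing log text:
def render(e: SemanticLogEvent): String = e match {
  case HeartbeatReceived(from, rtt) => s"""{"event":"heartbeat","from":"$from","rttNanos":$rtt}"""
  case NodeGated(addr, reason)      => s"""{"event":"gated","address":"$addr","reason":"$reason"}"""
}
```

A dashboard could then aggregate on fields like rttNanos directly instead of parsing formatted text.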

I think that would be a very nice addition to any type of monitoring support.

We on the .NET side do support semantic logging via Serilog as a logging adapter; however, that requires a different kind of format/template for your actual log messages.
So it is absolutely possible to use semantic logging in user code of Akka.NET.

But inside the core modules, we still have the standard string-formatted messages + LogEvent derivatives that simply output their content via ToString().

Thoughts on this?

Consider replacing custom Actor based logging with SLF4J

Currently, Akka has a custom logging framework based on Actor messages which has its own logging API which looks very similar to SLF4J.

This can be bridged to SLF4J / Logback as described in the docs: http://doc.akka.io/docs/akka/snapshot/java/logging.html#SLF4J

This reimplementation of SLF4J causes a few problems:

  • API confusion and edge case bugs arising from the API being not-quite the standard, detailed at akka/akka#16745
  • Edge case bugs arising from trying to log info about an ActorSystem while it is shutting down using Actors that belong to that system, detailed at akka/akka#17010
    • (a workaround is already in place to log messages that would otherwise be lost to this bug to stdout)
  • Extra complexity in the Akka codebase that seems irrelevant to Akka's goals

I think the best fix would be to remove the Actor-based logging and replace it with direct calls to Logback / SLF4J, with Logback configured to be non-blocking (Logback has solid support for async logging, see akka docs). This would solve the bugs linked above and reduce Akka complexity (there's no benefit to having Akka declare its own set of LogLevel enums when SLF4J already does so). See the comments on akka/akka#16745.

I think https://xkcd.com/927/ applies -- Akka doesn't need its own logging API; it should use SLF4J.

Of course this would be a massive breaking change :|

Thoughts? One for Akka 3 / Akka 2.next? One to write off as "that ship has sailed, sorry"?

Make ActorRefs unforgeable

This basically means adding a long-enough, cryptographically random piece of data at the end of an ActorRef (practically, make the UID longer and safer).

There are additional changes required to make this useful in practice:

  • remove the old Identify feature so it is impossible to look up actors by name anymore, without knowing the secret part of the ActorRef. Receptionists/Service lookup actors should be used instead. The benefit is that now it can be tightly controlled which actors can access which other actors
  • make it possible to create cheap proxy actors that can filter messages passed to their target. The benefit is that this way actors can hand out proxies to its own references with a reduced service, and for example hand it out remotely. Since the original ActorRef cannot be looked up, nor can be forged, this is a good building block for access control. Recipients of the "reduced-service" ActorRef can pass it around as a capability possibly filtering or transforming along the way to sub-services.
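
The two points above could be sketched roughly like this (illustrative types only, not Akka's API; `Ref` and `narrow` are hypothetical names):

```scala
import java.security.SecureRandom

// Hypothetical sketch: a reference made unforgeable by a long random token,
// plus a cheap filtering proxy that hands out a reduced capability without
// revealing the original reference.
final class Ref[T] private (val path: String, token: Vector[Byte], send: T => Unit) {
  def !(msg: T): Unit = send(msg)
  // capability attenuation: recipients of the narrowed ref can only send
  // messages that the filter accepts (and transforms) to the hidden target
  def narrow[U](f: U => Option[T]): Ref[U] =
    new Ref[U](path, Ref.freshToken(), (u: U) => f(u).foreach(send))
}

object Ref {
  private def freshToken(): Vector[Byte] = {
    val bytes = new Array[Byte](32)
    new SecureRandom().nextBytes(bytes)
    bytes.toVector
  }
  def apply[T](path: String)(send: T => Unit): Ref[T] = new Ref(path, freshToken(), send)
}
```

A service could then hand out `narrow`ed proxies, even remotely; without the random token the underlying reference can neither be forged nor looked up by name.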

Akka Sprint Plan 2017-02-27

Sprint plan for the core Akka team

3 weeks
Start: 2017-02-27
End: 2017-03-17

Complete the New Materializer

  • All in the team should understand the new materializer by helping out with tasks to complete it
  • Make it work and fast

Akka Typed

  • Yeah, we are finally starting!
  • Get up to speed with what has been done so far
  • Review APIs and get an understanding of the implementation

Documentation

  • Use the new design and add content for akka.io site
  • Paradox Theme for the new design, collaborate with Tools Team

Play / Akka-Http integration

  • Benchmark the Play Framework use case with new materializer
  • Remove performance obstacles in play - akka-http integration in collaboration with play team
  • Complete HTTP/2 things based on feedback from Play usage.

Bugs and failures

Bug start count: 10
Failure start count: 44

Akka HTTP Bug start count: 41
Akka HTTP Failure start count: 11

Blog

  • Konrad: compression in Artery
  • Patrik: delta-CRDT and the protocol

Releases

  • Akka 2.5.0-RC1

Akka Sprint plan 2016-03-10

Sprint plan for the core Akka team

3 weeks
Start: 2016-03-10
End: 2016-04-01

Lots of holidays, vacations, and consulting

Bugs

Start count: 49
Goal: fix 15 issues, release Akka 2.4.3 and 2.3.15

Work on bugs and test failures is timeboxed to 4 days (Monday and Tuesday the first week, and Tuesday and Wednesday the second week). Customer-reported bugs may of course change this.

Backport stream/http issues to 2.0.x

Failures

Goal: no test failures that recur every day

Kafka Connector

Goal: below tasks and first release of akka-reactive-kafka

Tasks:

  • import existing source to akka/akka-reactive-kafka
  • configure build, scalariform etc
  • pr validation, travis, cla validation
  • readme, documentation
  • ack protocol
  • v0.8 vs v0.9?
  • api, Akka Streams (Java/Scala)
  • embedded Kafka + Zookeeper for tests
  • bugs?
  • release

Cassandra Connector

  • define scope and high level design
  • establish contacts
  • we probably need CDC CASSANDRA-8844

Akka Streams

  • fixing materialized value computation:
    • currently hitting recursion limits too soon
    • interaction with fusing is con-fusing

Akka Http

  • Mentor Java DSL and other contributions?
  • Lots of people trying it / testing, finding problems – need to address.
  • half-closed websocket madness – ticketify and work

akka-persistence-cassandra

  • eliminate blocking in initialization, stretch goal

Knowledge sharing

  • Akka typed
  • Stream fusing
  • LARS

Other

  • scalariform update

Releases

  • 2.4.3 and update RP 16s01 tag
  • 2.3.15 and update RP 15v01/15v09 tag
  • streams 2.0.4
  • first release of akka-reactive-kafka

Akka Sprint plan 2016-05-23

Sprint plan for the core Akka team

3 weeks
Start: 2016-05-23
End: 2016-06-10

Artery

The focus should be on hardening failure scenarios and optimize performance. Not everything will be completely done but we will iterate.

  • basic flight recorder
  • blackhole in remote testkit, and other features that we need for writing tests
  • failure scenarios
    • port existing tests, improve them, and write new tests
    • review old implementation and make sure that we cover all failure scenarios
    • race conditions related to system messages and incarnations of systems
      • perhaps include UIDs in each system message envelope and ack so that the intended destination/origin is explicit?
    • stress tests, randomized consistency tests
  • complete compression table
  • serialization api based on byte buffers
  • benchmark and improve performance
    • setup EC2 servers
  • send heartbeat messages over control stream
  • origin address issue
  • how to use deadLetters?
    • system messages?
    • not all dropped messages will go to deadLetters
  • extract configuration
    • high level settings => low level settings
    • advanced section
  • meta data section in header, for monitoring
  • encryption prototype, investigation

Kafka Connector

  • We still would like community help with
    • add more comprehensive tests
    • add basic performance tests
  • Support Lagom cross-service pub-sub

Akka Http

  • Guide contributors

Akka Streams

  • Continue to mentor community contributions

Akka Typed

  • Coordinate and be involved in discussions

Bugs and failures

Bug start count: 27
Failure start count: 28

Work on bugs and test failures is timeboxed to 4 days (Monday and Tuesday the first and second week). Customer-reported bugs may of course change this.

Test failures

  • Start count: 35
  • Goal: no repeating, decreasing number

Other

  • triage issues, 151 not triaged, 43 marked New

Releases

  • 2.4.x on demand
  • Artery milestone at the end of the sprint

Akka Sprint plan 2016-08-22

Sprint plan for the core Akka team

3 weeks
Start: 2016-08-22
End: 2016-09-09

Artery

  • The goal is to merge Artery to master and release it in a 2.4.x by end of September
  • Tickets have been created for remaining work
    • community help is of course welcome

Reactive Kafka

  • Release candidate is out. Fix any critical issues and release final 0.11.
  • Announcement

Akka Http

  • Move code, docs and issues to https://github.com/akka/akka-http
  • For the docs we need to create a conversion tool from rst to Paradox
    • Hopefully the community can help out with the manual adjustments of the documentation.
  • Meeting with the core community

Akka Streams

  • Complete the Hub
    • documentation
    • javadsl
  • Complete the materialization improvements that were started
    • Http performance for short lived connections should improve from this

Bugs and failures

Bug start count: 28
Failure start count: 36

We should also fix some of the issues reported in akka-persistence-cassandra

Work on bugs and test failures is timeboxed to 4 days (Monday and Tuesday the first and second week). Customer-reported bugs may of course change this.

Other

Releases

  • 2.4-ARTERY-M4
  • akka-stream-kafka 0.11

Misleading approach to actor initialization

The PreStart signal has been removed and it was suggested to use Actor.deferred instead.
But this can be misleading for newcomers.

Here is an example: let's say I want to schedule a message to ctx.self from an actor.
This is the correct approach:

sealed trait MyProtocol
case object StartWork extends MyProtocol

def prepare(): Behavior[MyProtocol] = Actor.deferred[MyProtocol] { ctx =>
    ctx.schedule(1.second, ctx.self, StartWork)
    init()
}

def init(): Behavior[MyProtocol] = Actor.immutable[MyProtocol] { (ctx, msg) =>
    msg match {
        case StartWork =>
            // Message was sent
            Actor.same
    }
}

But my initial, incorrect approach was:

sealed trait MyProtocol
case object StartWork extends MyProtocol

def init(): Behavior[MyProtocol] = Actor.immutable[MyProtocol] { (ctx, msg) =>
    ctx.schedule(1.second, ctx.self, StartWork)

    msg match {
        case StartWork =>
            // Was NOT triggered
            Actor.same
    }
}

I guess my problem with it is that it is really easy to make a mistake, because it looks like I have some space for initialization logic after the (ctx, msg) => part.

It also might not be clear that ctx.self inside of deferred is the same as in the underlying immutable block.

P.S. I know that there have already been a lot of talks related to that, but I'm just sharing my feedback and a problem that many can potentially face with this version of the API.

Akka Sprint plan 2016-10-11

Sprint plan for the core Akka team

3 weeks
Start: 2016-10-11
End: 2016-10-28

Security

  • Security Policy, setup mailing lists and such
  • Mutual authentication for Akka Remoting (Netty)
    • Likely: Expose context so people can set stuff on SSLContext
    • lots of information in this ticket: akka/akka#13874
  • Document how to setup TLS for Akka Remoting (Netty)
  • Whitelist support for remote deployment
  • Security audit logging, support for markers (slf4j, those for alerts)
  • Try ghostunnel and document if useful

Lagom

  • Scala API for persistence and serialization

Alpakka

  • Create repository
  • Documentation skeleton (Paradox)
  • Define list of essential integrations
  • Announce and contact interesting existing projects
  • Document requirements for contributing into this repo (java/docs required etc)

Documentation

  • Convert to Paradox
  • Draft of new outline

Bugs and failures

Bug start count: 11
Failure start count: 28

Akka HTTP Bug start count: 23
Akka HTTP Failure start count: 10

Work on bugs and test failures is timeboxed to 4 days (Monday and Tuesday the first and second week). Customer-reported bugs may of course change this.

Blog

  • Emit and friends
  • Threading and Concurrency part 2

Other

  • Follow up on survey (t-shirts)
  • Find designer for akka.io website work

Releases

  • Milestone/RC release of akka-http
  • 2.3.16 (probably the last 2.3.x release)

Akka Sprint plan 2016-04-28

Sprint plan for the core Akka team

3 weeks
Start: 2016-04-28
End: 2016-05-20

Artery

We will not be able to complete everything in this sprint, obviously, but we should start real implementation touching most of the important parts, such as:

  • Sink/Source vs Flow
  • Handshake, UID, Quarantine
  • System message delivery
  • Sub-channels
  • Serialization and buffer management
  • Error handling
  • Lockless association registry
  • Envelope format
  • Compression table
  • Encryption prototype, investigation
  • Flight recorder

We will publish it as some kind of M1 if the implementation is good enough for others to try out.

Kafka Connector

  • Continue work on the Kafka connector based on feedback and known outstanding tasks, such as Java API, fixing bugs and more comprehensive tests. Hopefully the community will be involved and help us with this.
  • Support Lagom when cross-service pub-sub is started

Akka Http

  • Make Java DSL mergeable and releaseable
    • Document directives (not code samples for now to get to release)
    • Migration guide (not step by step, because that is not possible)
    • Fix 1 failing test
    • Custom headers
    • Test coverage for scary cases, basically every directive with // TODO
    • Extra: Sealing is not configurable in new JavaDSL
  • Document workaround for half-closed websocket
  • Triaging Http issues
    • Pair triaging, Konrad and Johan

Akka Streams

  • Continue to mentor community contributions, including rewrite of remaining Akka Streams internals that are not using GraphStage (groupBy)
  • Create Streams Misc repository

Bugs

  • Start count: 28
  • Goal: fix ~5

Work on bugs and test failures is timeboxed to 4 days (Monday and Tuesday the first and second week). Customer-reported bugs may of course change this.

Test failures

  • Start count: 35
  • Goal: no repeating, decreasing number

Knowledge sharing

  • Make sure all are involved in Artery development, at least PR reviews

Other

  • triage issues

Releases

  • 2.4.5 as soon as the Http Java DSL is merged
  • on demand

Out of Scope

Things we considered, but didn’t make it in this sprint.

  • Optimize non-persistent http connections
  • Optimize materialization speed
    • might be another approach to solving non-persistent http connections
    • enabler for melding
  • The Hub
  • GraphStage TLS
  • half-closed websocket madness (#19957)
  • Akka website?

Akka Sprint Plan 2017-02-06

Sprint plan for the core Akka team

3 weeks
Start: 2017-02-06
End: 2017-02-24

HTTP/2

Complete things based on feedback from Play usage.

See t:http2

Akka 2.5

  • Complete remaining tickets
  • Distributed Data
    • Implement delta-CRDT for more data types

Materialization Optimization

Bugs and failures

Bug start count: 10
Failure start count: 46

Akka HTTP Bug start count: 43
Akka HTTP Failure start count: 10

Other

  • Tooling to improve productivity
  • Scala 2.12 testing
    • make sure that the jenkins job is testing with 2.12
    • fix overload issue in AbstractPersistentActor
    • benchmark FJ pool issue (not only ping-pong)
  • Work with designer for akka.io website work
  • Remove old samples and publish new samples
  • Write blog posts
  • New Getting Started Guide on hold until materialization work is completed.

HOCON Strings

This is a crosspost from the developer list:

According to the HOCON Spec, and JSON Spec, only double quotes " are supported for quoted strings.
We on the .NET side of things have seen this being brought up a few times now.
That is, people believe that single quotes are allowed, e.g.:

loggers = ['F.Q.N, Assembly']

This is parsed as: 'F.Q.N,
as it is treated as an unquoted HOCON string, which ends after the ",".

Does the JVM HOCON Parser allow this even though it is not mentioned in the spec?

And if not, would it be OK to add support for this?

I do realize that this would be a breaking change if people already use single quotes as characters in an unquoted HOCON string.
But not treating single quotes as string delimiters seems to be confusing for most people, or at least our users :)
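
For reference, the spec-conformant form of the example above uses double quotes:

```
# per the HOCON (and JSON) spec, quoted strings use double quotes
loggers = ["F.Q.N, Assembly"]
```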

Akka Sprint plan 2016-11-28

Sprint plan for the core Akka team

4 weeks
Start: 2016-11-28
End: 2016-12-22

Akka 2.5

  • Coordinated shutdown
    • e.g. to support better cluster Leaving with coordination with Cluster Singleton and Cluster Sharding
  • Distributed Data
    • Promote to non-experimental
    • Ensure API can support delta-CRDT
  • Revisit Java8 API for actors
    • AbstractActor is still experimental
  • Deprecate, remove things
  • Artery
    • Make parallel lanes performant
    • Move compression ownership into Decoder
    • AFR
      • More events
      • Enable by default (naming of files)
  • Switch on remote mutual TLS authentication setting by default

Streams

  • Materializer
    • Traversal oriented layout, see PR 21057
    • Performance improvements

Samples

Bugs and failures

Bug start count: 8
Failure start count: 28

Akka HTTP Bug start count: 35
Akka HTTP Failure start count: 8

Blog

  • Overview of Artery features and internals
  • Threading and Concurrency part 2

Other

  • Work with designer for akka.io website work
  • ActorSystemSettings
  • Improve Build
    • PR validation build times
    • Build that tests akka-http with latest akka

Releases

  • akka 2.5-M1
  • alpakka, akka-http "on demand" if we accumulate enough to release

How to compose akka-typed actors with akka-persistence?

Right now we are able to define an actor as a single function behavior - using akka-typed (JVM) or the F# API (.NET). However, this doesn't fit well with the existing persistent actor design, for various reasons, mostly concerning the actor's state. A few issues come to mind:

  • State could potentially be changed asynchronously by the persist method's callback, which makes storing it as a behavior function parameter difficult.
  • The only logical alternative seems to be making a dedicated Persist effect, but then we lose the ability to change behavior when returning from the function.
  • Separate receive methods for events and commands also won't make things easier.

So my question to Typesafe guys - what are your thoughts about possible integration of akka typed and persistence in the future?
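
The "dedicated Persist effect" idea from the second bullet could be sketched like this (illustrative names only, not a real API): the command handler returns a description of what to persist, and the state is evolved by a separate event handler, so no state needs to be captured in an asynchronous persist callback.

```scala
// Hypothetical sketch of an effect-based persistent behavior
sealed trait Effect[+E]
case object NoEffect extends Effect[Nothing]
final case class Persist[E](event: E) extends Effect[E]

// decide (command => effect) and evolve (event => state) are pure functions,
// so the runtime can apply persisted events to the state deterministically
final case class PersistentBehavior[C, E, S](
  initialState: S,
  onCommand: (S, C) => Effect[E],
  onEvent: (S, E) => S
)
```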

Akka Typed: Simplified Actor Lifecycle

While implementing the new akka.typed.internal.ActorCell based on akka.actor.ActorCell I wondered about the complexity of the internal mechanics in relation to the user-visible feature set. A significant part of the cognitive load for understanding the implementation is caused by the fact that the Actor and all its sub-hierarchy is suspended while waiting for a failure verdict from the supervisor. When that verdict comes back, we either Resume/Stop (the simple cases) or Restart (the most complex case). Restarting has the goal and benefit of keeping the mailbox around and the ActorRef stable across the act of handling a failure. Resuming has some built-in complications due to the ability to fail during actor creation and recover from it. Stopping is rather straight-forward since it is a one-way street with very simple semantics.

Proposal 1 (mostly a mental stepping stone)

Remove the ability to recover from failures during creation: instead of escalating an ActorInitializationException the actor terminates unconditionally. The supervisor should be informed about this abnormal termination by way of a FailedTerminated signal that does not allow any built-in reaction, a retry would have to be done in the logic handling this signal.

This would simplify internal book-keeping somewhat and remove some complex code paths that are needed for getting the right information into the right places (so that Resume can be turned into Create for example).

Proposal 2

A more radical proposal occurred to me when contemplating where the complexity of the ActorCell stems from: it is complicated both by hierarchical supervision and by the ability to restart with a stable ActorRef. Both of these deviations from the Actor Model (and from other implementations) are desirable, I am not questioning their existence. But with the new way of composing behaviors in Akka Typed I do think that we can do something about the complexity.

My proposal is to remove the suspension logic and asynchronous restartability from the ActorCell. Restarting can be modeled more efficiently by a behavior decorator that decides and recreates synchronously.

The consequence would be that only termination is signaled to the supervisor, including the information about whether it was of normal or abnormal origin. This will simplify the ActorCell tremendously, taking away the multitude of suspension races that define its current design. The suspension counter would be turned into an isTerminating flag that inhibits the processing of messages while waiting for the sub-hierarchy to shut down.

But what about fault-handling delegation?

The most important notion behind hierarchical supervision is that the handling of failures is delegated to a supervisor instead of burdening it onto the client (as is done by virtually every other framework). We modeled this very directly by message-passing since version 2.0-M1 for a very simple reason: remote deployment.

As discussed in #18 this feature should be removed, opening up other possibilities. The supervisor is of course free to enrich any child actor it creates with a behavior decorator that catches exceptions and reacts appropriately (e.g. by using the nested behavior factory to perform a full restart of the actor). If the supervisor wants to keep track of such restarts, normal messages can be sent from the decorator as notifications—since the primary responsibility for keeping the child actor running now lies with the child’s wrapped behavior, these notifications can be delivered by default using at-most-once semantics without sacrificing safety or liveness.

Another consideration is that composed behaviors within a single actor can also make use of the behavior decorators for specialized failure handling.
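
Such a restart decorator could be sketched like this (illustrative types, not the real ActorCell or Akka Typed API): failures are handled synchronously by recreating the wrapped behavior from its factory, keeping the mailbox and ActorRef stable without suspending and waiting for an asynchronous supervisor verdict.

```scala
import scala.util.control.NonFatal

// Hypothetical minimal behavior model for the sketch
trait Behavior[T] { def receive(msg: T): Behavior[T] }

def restarting[T](factory: () => Behavior[T]): Behavior[T] = {
  def wrap(current: Behavior[T]): Behavior[T] = new Behavior[T] {
    def receive(msg: T): Behavior[T] =
      try wrap(current.receive(msg))
      catch { case NonFatal(_) => wrap(factory()) } // synchronous full restart
  }
  wrap(factory())
}
```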

Interaction with DeathWatch

Non-parents who watch an actor should only be notified of that actor’s termination, regardless of the reason. Parents on the other hand should have the ability to react to abnormal termination differently than to normal termination. This raises the question of how to expose this difference in a way that is consistent with general DeathWatch.

One way would be to create a new feature that does not interact, meaning that spawning a child actor would generate parent notifications and watching that child actor would in addition generate the Terminated signal. The question would then be at which point it should be allowed to recreate a child actor with the same name as the failed one.

Another way would be to add a flag to the Terminated signal that would only ever be true within a supervisor after its child actor has failed. Reusing the signal would mean that failures are only communicated if the child actor has been watched—just like for normal termination. This would also play nicely with the DeathPact logic in that a one-off actor that is created without caring about its result will also not require any code to avoid the escalation of failures.

Other Benefits

One of the more troublesome questions with hierarchical failure handling is that with Akka Actor it feels a bit like we just moved the catch-block into the supervisor, including the burden of having to know about the child actor’s failure modes. The point of the let-it-crash pattern is precisely to avoid this kind of coupling, it should be enough to have the supervisor realize that an action is necessary without offering further details. The proposed change would express this shift of mindset quite nicely, separating the handled failures clearly from the unhandled ones.

More technically, this change would remove the horrendous hack of Failed.decide(verdict), allowing the notification to become immutable once again.

And the biggest one: SupervisorHierarchySpec will finally become understandable to mere mortals (including my current self).

The Plan

This whole discussion and proposal is of course isolated to the new implementation of Akka Typed, it has no bearing on the untyped actor implementation. One thing it does affect, though, is the ability of mixing typed and untyped actors within the same ActorSystem: it will likely turn out to be impractical to implement this interoperability feature, which would mean that systems migrating from untyped to typed mode will have two ActorSystems—sending messages from one to the other is of course still possible.

@akka/akka-team What do you think?

Non-monotonic, incremental data-flow

I was asked to open a ticket here after a discussion I started at akka-user mailinglist. I take the liberty of further expanding on some (admittedly very ambitious) ideas I have been playing with, that in my view align well with Akka.

I have been looking into a model of computation that exposes what can be efficiently computed on (hybrid/pure) P2P infrastructure and that provides syntax and concepts to do so. The API offered by Spark comes close, but lacks for me in several regards:

  • Neither the batch, nor the streaming model is sufficiently universal to me. AFAIK they started working this year on continuous applications, which combines some of the batch/stream properties, and integrate well with either when appropriate. It is a high-level API that almost completely abstracts away the complexities of distributed computing. Their current implementation leaves out non-monotonic behavior (updates to data and deletes), which is something that I would like to include. Note that this could be added on top, but:
  • The implementation of the API and technologies to keep RDDs available don't scale to the (p2p) web. Size rules out a total view of the topology, latencies and churn are much higher and variable. IMHO, starting over with these things in mind is easier than adding them afterwards.

In a nutshell, the architecture I would like to realize includes clients in applications as shallow replicas. They replicate data that they are interested in, and partake in computations whose result interests them. Minimizing use of their resources when it is not in their interest reduces the incentive to freeload, and makes for healthier p2p applications. Within those constraints, I want to offer an incremental data-flow API similar to Spark's continuous application API, but allow for non-monotonic expressions too. My current plan of approach:

  1. Developing a library for P2P peer sampling
  2. Developing a library for constructing topologies, with a first focus on a topology for map-reduce.
  3. Use Delta-CRDTs as RDDs.
  4. Developing Delta-CRDT processes for common combinators, functional or set-theoretic.

The Akka actor library is a good foundation for some of these tasks. We have a common desire in delta-CRDTs (3). A first open question is whether Akka would be helped by having CRDT processes (4), and if so, in what form. A second question is whether an Akka-P2P (1-2) module is something that fits the roadmap for Akka.
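
To make items (3)-(4) concrete, a minimal delta-state G-Counter can be sketched as follows (a didactic sketch, not Akka's Distributed Data API): each increment yields a small delta that can be gossiped instead of the full state, and merge is the usual join.

```scala
// Hypothetical delta-state G-Counter: state is a per-replica max counter
final case class GCounter(entries: Map[String, Long]) {
  def value: Long = entries.values.sum

  // returns (new full state, delta to disseminate)
  def increment(replica: String): (GCounter, GCounter) = {
    val next = entries.getOrElse(replica, 0L) + 1
    (GCounter(entries.updated(replica, next)), GCounter(Map(replica -> next)))
  }

  // join: pointwise max, used both for deltas and for full-state merges
  def merge(that: GCounter): GCounter =
    GCounter((entries.keySet ++ that.entries.keySet).iterator.map { k =>
      k -> (entries.getOrElse(k, 0L) max that.entries.getOrElse(k, 0L))
    }.toMap)
}
```

Note how a delta only mentions the replica identifier it touches, which is the property referred to below.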

@rkuhn mentions an interest in CRDT persistence and a desire to "decouple the actual datatype and the management of its δ-state from Akka Cluster". With respect to the former, I've been impressed with the parquet format and project Tungsten, is that a direction you would like to take it in?

Regarding the latter, @rkuhn mentions the trouble of removing old nodes. I am guessing that he refers to periodically cleaning up version vectors to reduce the number of replicas, and that including uncontrolled nodes introduces the risk of them re-including cleaned-up references to replicas? The advantage of delta-CRDTs is that only the replica identifiers relevant to the delta being dispatched are included, which significantly reduces the problem. The potential problem of unbounded growth remains, the solution to which is further complicated by the combinators, as elements from derived data-sets reuse the dots from source data-sets. My first intuition would be versioned compactions in a lexicographic product lattice; I would need to work out the messy details though...

To finish up a lengthy post, my hope is that these might be interesting ideas to you, and that we can identify the elements fit for Akka's roadmap, as well as setting out concrete actions to realize those parts. Needless to say, I'm happy to contribute to any intersecting concerns!

Akka Sprint plan 2016-04-04

Sprint plan for the core Akka team

3 weeks
Start: 2016-04-04
End: 2016-04-22

Extract commercial features

  • Extract Split Brain Resolver and Diagnostics Recorder so that they can be used as a library on top of OSS Akka

akka-persistence-cassandra

  • eliminate blocking in initialization

Remoting

  • Define goals
    • and non-goals
    • Write-up of conclusions
    • Performance
    • Maintainability
    • Availability
    • Security
  • Discuss design and alternatives
    • UDP vs TCP vs HTTP/2
  • Create prototypes
    • Aeron
    • “Simple UDP” prototype if Aeron does not fit
    • Stream TCP
    • Monitoring, benchmarks
      • Flight recorder
  • Define what is needed for “rolling upgrades” (stable protocol)
    • Q: think about serialization formats
    • No java serialization
  • Handshake (uid)
  • Large messages
  • Priority of channels, fairness
  • backpressure mechanisms
  • Q: is doing “cross-datacenter” a goal?
  • Keep monitoring team in the loop

Kafka Connector

  • Continue work on the Kafka connector based on feedback and known outstanding tasks, such as Java API, fixing bugs and more comprehensive tests. Hopefully the community will be involved and help us with this.
  • Learn about Kafka Streams

Akka Http

  • Complete Java DSL
  • Allocated time for maintenance and community mentoring
  • Document workaround for half-closed websocket
  • Triaging Http issues

Akka Streams

  • Rewrite the remaining Akka Streams internals that are not using GraphStage (groupBy). We might get some help from the community with this.
    • We should also do TLS, but not in this sprint

Bugs

  • Start count: 34
  • Goal: fix ~8

Work on bugs and test failures is timeboxed to 4 days (Monday and Tuesday the first and second week). Customer-reported bugs may of course change this.

Test failures

  • Start count: 34
  • Goal: no repeating, decreasing number

Knowledge sharing

  • The triaging of Http issues is sharing info basically
  • More knowledge sharing in the f2f team meeting, e.g. current remoting

Other

  • scalariform update - ASAP
  • update RP 16s01,15v01, 15v09 tags
  • triage issues

Releases

  • 2.4.4 as soon as the Http Java DSL is merged
  • one or more akka-reactive-kafka milestone releases
  • akka-persistence-cassandra bug fix release

Out of Scope

Things we considered, but didn’t make it in this sprint.

  • Optimize non-persistent http connections
  • Optimize materialization speed
    • might be another approach to solving non-persistent http connections
    • enabler for melding
  • The Hub
  • GraphStage TLS
  • half-closed websocket madness (#19957)
  • Akka website?

Akka Typed: API discussion

Today we reviewed the current typed Actor API for Scala. We tried to improve a few things:

  • make the naming of the factory methods more intuitive
  • don't require msg match everywhere (which is needed because the callbacks are defined with 2 parameters, context and message)
  • try to unify usages of Deferred (which would be mostly used for behavior definitions relying on mutable internal state) with the more functional API (like Stateful)

We attempted a prototype of what it could look like. The basic ideas are these:

  • (re)introduce withContext as the single way to access the context
  • Behavior is now opaque to the user, internally different subclasses are supposed to be used, whatever runs the behavior will have to distinguish these subclasses (like it already does now for Same and Unhandled)
  • we might get rid of the preStart signal

We reimplemented the chatroom example in several different styles using the new API. We currently favor the chatRoomMutable and chatRoomFunctional ways of doing things. (We understand that the chatRoomFunctional example uses one extra lambda allocation to support withContext, which might be fine; chatRoomFunctionalRecursive improves upon that but may be harder to read.)

object PotentialAPIImprovements {
  trait Ctx[T] {
    def self: ActorRef[T]
    def deriveActorRef[U](f: U ⇒ T): ActorRef[U]
    def watch(actorRef: ActorRef[_]): Unit
  }

  trait Behavior[T]

  def Same[T]: Behavior[T] = ???

  def withContext[T](f: Ctx[T] ⇒ Behavior[T]): Behavior[T] = ???

  def handleMessages[T](f: T ⇒ Behavior[T]): Behavior[T] = handleMessagesAndSignals(f)(PartialFunction.empty)
  def handleMessagesAndSignals[T](f: T ⇒ Behavior[T])(signals: PartialFunction[Signal, Behavior[T]]): Behavior[T] = ???

  def recursively[T, U](initialState: U)(f: (U ⇒ Behavior[T]) ⇒ U ⇒ Behavior[T]): Behavior[T] = ???

  // current `Stateful` for comparison
  def functionalTwoArgs[T](f: (Ctx[T], T) ⇒ Behavior[T]): Behavior[T] = ???
}

object TestExample {
  sealed trait Command
  final case class GetSession(screenName: String, replyTo: ActorRef[SessionEvent])
    extends Command
  //#chatroom-protocol
  //#chatroom-behavior
  private final case class PostSessionMessage(screenName: String, message: String)
    extends Command
  //#chatroom-behavior
  //#chatroom-protocol

  sealed trait SessionEvent
  final case class SessionGranted(handle: ActorRef[PostMessage]) extends SessionEvent
  final case class SessionDenied(reason: String) extends SessionEvent
  final case class MessagePosted(screenName: String, message: String) extends SessionEvent

  final case class PostMessage(message: String)

  import PotentialAPIImprovements._

  def chatRoomMutable: Behavior[Command] =
    withContext[Command] { ctx ⇒
      var sessions: List[ActorRef[SessionEvent]] = Nil

      handleMessagesAndSignals[Command]({
        case GetSession(screenName, client) ⇒
          val wrapper = ctx.deriveActorRef {
            p: PostMessage ⇒ PostSessionMessage(screenName, p.message)
          }
          client ! SessionGranted(wrapper)
          sessions ::= client
          ctx.watch(client)
          Same
        case PostSessionMessage(screenName, message) ⇒
          val mp = MessagePosted(screenName, message)
          sessions foreach (_ ! mp)
          Same
      })({
        case Terminated(ref) ⇒
          sessions = sessions.filterNot(_ == ref)
          Same
      })
    }

  def chatRoomFunctional: Behavior[Command] = {
    def state(sessions: List[ActorRef[SessionEvent]]): Behavior[Command] =
      handleMessagesAndSignals[Command]({
        case GetSession(screenName, client) ⇒
          withContext { ctx ⇒
            val wrapper = ctx.deriveActorRef {
              p: PostMessage ⇒ PostSessionMessage(screenName, p.message)
            }
            client ! SessionGranted(wrapper)
            ctx.watch(client)
            state(client :: sessions)
          }
        case PostSessionMessage(screenName, message) ⇒
          val mp = MessagePosted(screenName, message)
          sessions foreach (_ ! mp)
          Same
      })({
        case Terminated(ref) ⇒ state(sessions.filterNot(_ == ref))
      })

    state(Nil)
  }

  def chatRoomFunctionalRecursive: Behavior[Command] =
    withContext { ctx ⇒
      recursively(List.empty[ActorRef[SessionEvent]]) { (state: List[ActorRef[SessionEvent]] ⇒ Behavior[Command]) ⇒ (sessions: List[ActorRef[SessionEvent]]) ⇒
        handleMessagesAndSignals[Command]({
          case GetSession(screenName, client) ⇒
            val wrapper = ctx.deriveActorRef {
              p: PostMessage ⇒ PostSessionMessage(screenName, p.message)
            }
            client ! SessionGranted(wrapper)
            ctx.watch(client)
            state(client :: sessions)
          case PostSessionMessage(screenName, message) ⇒
            val mp = MessagePosted(screenName, message)
            sessions foreach (_ ! mp)
            Same
        })({
          case Terminated(ref) ⇒ state(sessions.filterNot(_ == ref))
        })
      }
    }

  def chatRoomStatefulStyle: Behavior[Command] = {
    def state(sessions: List[ActorRef[SessionEvent]] = Nil): Behavior[Command] =
      functionalTwoArgs[Command] { (ctx, msg) ⇒
        msg match {
          case GetSession(screenName, client) ⇒
            val wrapper = ctx.deriveActorRef {
              p: PostMessage ⇒ PostSessionMessage(screenName, p.message)
            }
            client ! SessionGranted(wrapper)
            state(client :: sessions)
          case PostSessionMessage(screenName, message) ⇒
            val mp = MessagePosted(screenName, message)
            sessions foreach (_ ! mp)
            Same
        }
      }

    state()
  }
}

WDYT, @rkuhn?

Event sourcing for Akka Streams

I just published a proposal for how to add event sourcing to Akka Streams. It proposes an EventSourcing graph stage that, when joined with an event log, is a stream-based equivalent of a PersistentActor, but with type-safety and back-pressure for the whole event sourcing message flow. The proposal also implements a stream API on top of Akka Persistence journals and Apache Kafka to define a public interface for event logs.

Please note that this proposal is not a complete specification but rather illustrates some initial ideas and verifies them with a prototype implementation. In its current state, it should mainly serve as a basis to discuss the general approach (rather than implementation details). The proposal might become a later contribution if there is enough interest.
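To make the general approach more concrete, here is a minimal pure-Scala sketch of the command/event core that such an EventSourcing stage would wrap. All names here (Emitted, EventSourcingLogic) are illustrative stand-ins rather than the proposal's actual API, and the real stage would of course write the emitted events to an event log (Akka Persistence journal or Kafka) before applying them:

```scala
// Stand-in for the event-sourcing core of such a stage: a command handler
// decides which events to emit, an event handler folds events into state.
final case class Emitted[E](events: List[E])

final case class EventSourcingLogic[S, C, E](
  commandHandler: (S, C) => Emitted[E],
  eventHandler: (S, E) => S) {

  // Process one command: derive events, then apply them to the state.
  // In the proposal, persistence to the event log happens between
  // these two steps, which is what gives back-pressure end to end.
  def handle(state: S, command: C): (S, List[E]) = {
    val emitted = commandHandler(state, command).events
    (emitted.foldLeft(state)(eventHandler), emitted)
  }
}
```

Joined with an event log source/sink, a value like this plays the role the PersistentActor's receiveCommand/receiveRecover pair plays today, but as a typed, back-pressured stream element.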

Akka HTTP - 2nd free 4 all call

Hi there,
we thought it might be nice to have another call like last time - it was very enjoyable.

Agenda:

  • Think about the compatibility guarantees for upcoming versions (source vs binary, fix vs minor vs major release) akka/akka-http#439
  • streaming marshalling support, see akka/alpakka#63 akka/akka#21826
  • HTTP/2 status
  • Being mostly involved in the documentation improvements two things come to mind: status of akka/akka-http#11 and akka/akka-http#527 (with related Paradox tickets)
  • akka http client - what's missing, how to get started about improving
  • ??? - your topics
  • X-mas wishes 🌲

Proposed time: 7th December 17:00 CET?
Here, for watching: https://www.youtube.com/watch?v=Z_AktNMcomU
Here for talking: https://hangouts.google.com/hangouts/_/ytl/6ijUMp6iQIysGXYR8w4_QPEc4BUdNfcQ2oF2wtEgJO4= (anyone is welcome)

Please post comments in this topic with what would be nice to discuss during the call.

Akka Sprint plan 2016-07-10 – Akka HTTP

Sprint plan for the core Akka team

Topic: Akka HTTP

3 weeks
Start: 2016-07-11
End: 2016-07-29

Vacation warning

Plenty of vacations across the team these weeks, progress will be somewhat slower than usual.

Kafka Connector

  • Remain available for discussions, no active work on it during this sprint
    • (during the next we'll focus there)

Akka Http

  • Prepare baseline measurements
    • Konrad made some last week
  • Collect flamegraphs, fix hotspots
  • Explain how to use Http bench infra to team (EC2 boxes)
  • Short lived connections case
    • Measure, check; don’t go all in on it
  • See where Akka HTTP starts to become overwhelmed and Spray didn’t
    • That’s an area to optimise as well
  • Strengthen community involvement
  • Think and write down versioning moving forward

Akka Streams

  • Optimize materialization speed
    • might be another approach to solving non-persistent http connections
    • enabler for melding
  • Specialize linear graphs (?)

Artery

  • Await community feedback

Akka Typed

  • Coordinate and be involved in discussions

Other

  • Release Java Formatter SBT plugin and use it in Akka

Bugs and failures

  • Bug start count: 26
  • Failure start count: 35

Work on bugs and test failures is timeboxed to 4 days (Monday and Tuesday of the first and second week). Customer-reported bugs may of course change this.

Blog

  • Write one additional post
  • publish one post, maybe in 2nd week, following up on the Streams theme

Releases

  • Akka 2.4.9 at end of Sprint, focused on shipping the Akka HTTP work
  • Akka Stream Contrib on demand (likely do publish one)

Monitoring support

I know that you are looking into monitoring support on the JVM side.
If I can make a wish, I'd like to see an interface that has an API similar to this:

interface IActorMonitor
{
        void PostRestart(ActorRef actor, ....);
        void PostStop(ActorRef actor, ....)
        ....
}

In most cases, monitoring will probably not be interested in individual actors; it just needs an IncrementStopped() kind of API.

But with a monitoring API like the one proposed above, we can provide other monitoring-related features, e.g. debug visualizers that update in real time when the topology changes.

For example, I did a spike on this, visualizing our chat example:

(image: akkagraph, a real-time visualization of the chat example's actor topology)

I think that could be extremely valuable to end users to see how their system or a subsection of the system behaves.
And by allowing the monitoring support to reason about the actual ActorRef that has been affected, this is pretty easy to accomplish.
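To illustrate that the coarse counter-style API falls out of the richer per-actor callback interface, here is a small self-contained Scala sketch. The type names (ActorId, ActorMonitor, CountingMonitor) are stand-ins for this illustration, not the real Akka API:

```scala
import java.util.concurrent.atomic.AtomicLong

// Stand-in for an actor reference; the real API would pass the ActorRef.
final case class ActorId(path: String)

// Per-actor callback interface along the lines proposed above.
trait ActorMonitor {
  def postRestart(actor: ActorId): Unit
  def postStop(actor: ActorId): Unit
}

// Users who only want IncrementStopped()-style counters can get them as
// one trivial implementation of the per-actor interface, so the callback
// API subsumes the coarse one without a second abstraction.
final class CountingMonitor extends ActorMonitor {
  private val restarts = new AtomicLong
  private val stops = new AtomicLong
  override def postRestart(actor: ActorId): Unit = { restarts.incrementAndGet(); () }
  override def postStop(actor: ActorId): Unit = { stops.incrementAndGet(); () }
  def restartCount: Long = restarts.get()
  def stopCount: Long = stops.get()
}
```

A visualizer would instead implement the same trait and keep a graph keyed by the actor reference, which is exactly why the callbacks should receive the affected actor rather than just a counter bump.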

Distribute Typed Akka Separately

Can this be distributed as a separate artifact? If you want to switch to typed Akka, you may not care much for the untyped case. Other artifacts would also need to be broken into typed and untyped jars.

(Maybe too early to comment on this; improve documentation.)
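For illustration, with such a split a user wanting only the typed API might depend on a single artifact in sbt. The artifact name and version here are hypothetical, pending how the modules would actually be cut:

```scala
// build.sbt (sketch): depend only on the typed module;
// artifact name and version are hypothetical
libraryDependencies += "com.typesafe.akka" %% "akka-typed" % "2.5.0"
```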

presentation of Java/Scala DSLs for Akka Typed

The goal is to present a state of the API in 2.5.0 that can reasonably be declared final without modifications a few months later.

Requirements

For both languages I’d like to achieve the following:

  • it is possible to get access to the full DSL using a single import statement
  • it is possible to import either the set of combinators directly, or their enclosing scope for disambiguation
  • the name that is imported must be meaningful

Proposal

For Java: create an interface akka.typed.javadsl.Actor that contains static methods for constructing Behaviors, e.g. using Actor.<MyMsg>stateful((ctx, msg) -> ...).

For Scala: create an object akka.typed.scaladsl.Actor that contains the behavior constructors, e.g. Actor.stateful[MyMsg]((ctx, msg) => ...).

The synchronization of these two DSLs would work out nicely because the only required language construct—lambda expressions—works nearly identically in both languages. This does entail a sweeping, breaking change of the current ScalaDSL, though. We could leave that object as is and deprecate it, to be removed when the new DSLs are declared final.

@akka/akka-committers please comment.
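To sanity-check the ergonomics, here is a self-contained Scala sketch of what the proposed single-import usage could feel like. The types are heavily simplified stand-ins (there is no actor runtime here); only the names Actor and stateful and the single-import requirement follow the proposal:

```scala
// Stand-in for the proposed akka.typed.scaladsl.Actor entry point.
object Actor {
  trait Ctx[T]
  sealed trait Behavior[T]
  final case class Stateful[T](onMessage: (Ctx[T], T) => Behavior[T]) extends Behavior[T]
  final case class Same[T]() extends Behavior[T]

  // Behavior constructors live on the single object, so one import suffices.
  def stateful[T](f: (Ctx[T], T) => Behavior[T]): Behavior[T] = Stateful(f)
  def same[T]: Behavior[T] = Same()
}

// Requirement 1: full DSL via a single import statement.
import Actor._

sealed trait MyMsg
case object Ping extends MyMsg

// Either `stateful(...)` (combinator imported directly) or
// `Actor.stateful(...)` (enclosing scope for disambiguation) works.
val greeter: Behavior[MyMsg] = stateful[MyMsg]((ctx, msg) => same)
```

The Java side would mirror this with static methods on an Actor interface, so that Actor.<MyMsg>stateful((ctx, msg) -> ...) reads almost identically, which is what makes keeping the two DSLs synchronized plausible.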

Akka Sprint plan 2016-08-02 - Akka Streams Integrations

Start: 2016-08-02
End: 2016-08-19
Retrospective: 2016-08-19

Strategy document

Codename "Project Alpakka"

Some parts of the team on vacations, be warned :)

Artery

  • Await community feedback

Reactive Kafka

  • Review and polish
  • Prepare docs infrastructure for this project (preferably markdown-like based)
  • Attempt to release a stable by the end of Sprint?
    • Coordinate with Reactive Kafka team

Streams Integrations

We'll want to build examples of integrations.
The integrations will not be Lightbend-supported, but we may publish them in any case (for example in akka-streams-contrib in some way).

  • JMS
    • Investigate integrations; use as show case for integrating with legacy messaging systems
    • We want just a minimal thing to show how one would integrate
  • FTP
    • possibly have a look at demand and possibilities for FTP originating streams
  • Something to educate users about the connector system
  • Maybe a twitter source
    • Both fun and "live" example
    • Would be a show case for JSON Streaming on client side.
  • We have GraphStage documentation
    • Check if it's enough or not quite for people to grasp building integrations
  • Other integration ideas (again, no promises about delivering, but we'll have a look at some of these)
    • Amqp (http://www.amqp.org)
    • MQTT (Internet of things queues, provided by Amazon IoT, Azure IoT Hub, Heroku plugin: CloudMQTT). PAHO Java library.
    • S3 ?
    • XML streaming (to complement what we have for JSON), real world still has tons of XML

Akka Http

  • Await community feedback
    • once confident release 2.4.9 stable
  • Create akka-meta discussion on Akka HTTP versioning, and figure out how to best address compatibility vs. development once we declare stable

Akka Streams

  • The Hub!
    • Has to be fast
    • Worker pools example
    • Maybe chat example

Akka Typed

Bugs and failures

Bug start count: 26
Failure start count: 37

Work on bugs and test failures is timeboxed to 4 days (Monday and Tuesday of the first and second week).

Knowledge sharing

  • Review each other’s work; make sure the team is evenly up to date on the various efforts while we spread out on the integrations

Blog

  • Write Akka HTTP Streaming blog post
    • Show how to build a custom one
    • encourage community to build them
  • Write more blog posts about integration
    • No need to publish them yet, but each integration should have a post with it

Releases

  • 2.4.9 once confirmed to be stable
  • Akka streams contrib during this sprint
