Comments (9)
Depends on your definition of "processed" and a number of factors, but in many practical situations, yes. Messages are queued in the order they are "received". When running the front end single-threaded (the current situation in staging), this means exactly what you think, but the system is built to scale horizontally. When scaling out, there is no plan to synchronize across instances of the front-end service, which means two messages arriving in rapid succession could hit different instances; the resulting order of the messages is not predictable.
Similarly, when running a single worker for a given actor, messages are processed from the queue in order. When running multiple workers, messages are still pulled in order, but there is no synchronization (that would defeat the purpose of scaling), so the completion order is unpredictable.
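A minimal simulation (not Abaco code) of why FIFO pulls with multiple workers can still finish out of order: each worker takes the next queued message as soon as it is free, so a slow message can finish after later, faster ones.

```python
import heapq

def completion_order(durations, n_workers):
    """Simulate workers pulling messages FIFO from one queue.

    durations[i] is the processing time of message i. Each worker
    takes the next message as soon as it is free; the returned list
    is the order in which messages *finish*.
    """
    workers = [(0.0, w) for w in range(n_workers)]  # (time_free, worker_id)
    heapq.heapify(workers)
    finished = []  # (finish_time, message_index)
    for i, d in enumerate(durations):
        t_free, w = heapq.heappop(workers)        # next free worker pulls in FIFO order
        heapq.heappush(workers, (t_free + d, w))
        finished.append((t_free + d, i))
    return [i for _, i in sorted(finished)]

print(completion_order([3, 1, 1], n_workers=1))  # [0, 1, 2] -- order preserved
print(completion_order([3, 1, 1], n_workers=2))  # [1, 2, 0] -- message 0 finishes last
```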
from abaco.
Synchronization isn’t necessary, but a pool of actors should be pulling from their topic in FIFO fashion. If we aren’t seeing ordered processing when running in a single threaded deployment, then we need to take more aggressive action to ensure FIFO is honored from POST to processing.
—
Rion
from abaco.
No, that's how it is already implemented.
from abaco.
Here is a log from the current images in the Docker Registry, referenced in the repo's docker-compose-local.yml file. At the very least, we need timestamps (ISO 8601 preferred) to find out more; issue #15 elaborates.
Fresh run, no actors.
$ curl "docker.example.com:8000/actors"
{
"msg": "Actors retrieved successfully.",
"result": [],
"status": "success",
"version": "0.01"
}
Create a new actor, following the README.rst file.
$ curl -X POST --data "image=jstubbs/abaco_test&name=foo" "docker.example.com:8000/actors"
{
"msg": "Actor created successfully.",
"result": {
"default_environment": {},
"description": null,
"executions": {},
"id": "foo_0",
"image": "jstubbs/abaco_test",
"name": "foo",
"privileged": "FALSE",
"state": "",
"status": "SUBMITTED",
"streaming": "FALSE",
"subscriptions": []
},
"status": "success",
"version": "0.01"
}
Pause a second for it to pull the image and become ready.
$ curl "docker.example.com:8000/actors"
{
"msg": "Actors retrieved successfully.",
"result": [
{
"default_environment": {},
"description": null,
"executions": {},
"id": "foo_0",
"image": "jstubbs/abaco_test",
"name": "foo",
"privileged": "FALSE",
"state": "",
"status": "READY",
"streaming": "FALSE",
"subscriptions": []
}
],
"status": "success",
"version": "0.01"
}
Now that the actor is ready and operating as a single instance (no pooling), we submit 10 messages for the worker to process.
$ for i in {1..10};
> do
> curl -X POST --data "message=[${i}] execute yourself" "docker.example.com:8000/actors/foo_0/messages"
> done
{
"msg": "The request was successful",
"result": {
"msg": "[1] execute yourself"
},
"status": "success",
"version": "0.01"
}
{
"msg": "The request was successful",
"result": {
"msg": "[2] execute yourself"
},
"status": "success",
"version": "0.01"
}
{
"msg": "The request was successful",
"result": {
"msg": "[3] execute yourself"
},
"status": "success",
"version": "0.01"
}
{
"msg": "The request was successful",
"result": {
"msg": "[4] execute yourself"
},
"status": "success",
"version": "0.01"
}
{
"msg": "The request was successful",
"result": {
"msg": "[5] execute yourself"
},
"status": "success",
"version": "0.01"
}
{
"msg": "The request was successful",
"result": {
"msg": "[6] execute yourself"
},
"status": "success",
"version": "0.01"
}
{
"msg": "The request was successful",
"result": {
"msg": "[7] execute yourself"
},
"status": "success",
"version": "0.01"
}
{
"msg": "The request was successful",
"result": {
"msg": "[8] execute yourself"
},
"status": "success",
"version": "0.01"
}
{
"msg": "The request was successful",
"result": {
"msg": "[9] execute yourself"
},
"status": "success",
"version": "0.01"
}
{
"msg": "The request was successful",
"result": {
"msg": "[10] execute yourself"
},
"status": "success",
"version": "0.01"
}
Wait a few moments for them to complete and check the actor's execution history.
$ curl "docker.example.com:8000/actors/foo_0/executions"
{
"msg": "Actor executions retrieved successfully.",
"result": {
"ids": [
"foo_0_exc_1",
"foo_0_exc_0",
"foo_0_exc_4",
"foo_0_exc_2",
"foo_0_exc_6",
"foo_0_exc_3",
"foo_0_exc_5",
"foo_0_exc_9",
"foo_0_exc_8",
"foo_0_exc_7"
],
"total_cpu": 225457302,
"total_executions": 10,
"total_io": 7100,
"total_runtime": 20
},
"status": "success",
"version": "0.01"
}
Judging by the list of ids returned, it seems they executed out of order, but perhaps the endpoint simply isn't sorting them.
$ for i in {0..9}; do curl "docker.example.com:8000/actors/foo_0/executions/foo_0_exc_$i" ; done
{
"msg": "Actor execution retrieved successfully.",
"result": {
"cpu": 27726020,
"id": "foo_0_exc_0",
"io": 586,
"runtime": 2
},
"status": "success",
"version": "0.01"
}
{
"msg": "Actor execution retrieved successfully.",
"result": {
"cpu": 22597484,
"id": "foo_0_exc_1",
"io": 586,
"runtime": 2
},
"status": "success",
"version": "0.01"
}
{
"msg": "Actor execution retrieved successfully.",
"result": {
"cpu": 22576722,
"id": "foo_0_exc_2",
"io": 766,
"runtime": 2
},
"status": "success",
"version": "0.01"
}
{
"msg": "Actor execution retrieved successfully.",
"result": {
"cpu": 20926070,
"id": "foo_0_exc_3",
"io": 676,
"runtime": 2
},
"status": "success",
"version": "0.01"
}
{
"msg": "Actor execution retrieved successfully.",
"result": {
"cpu": 22684826,
"id": "foo_0_exc_4",
"io": 676,
"runtime": 2
},
"status": "success",
"version": "0.01"
}
{
"msg": "Actor execution retrieved successfully.",
"result": {
"cpu": 22998604,
"id": "foo_0_exc_5",
"io": 676,
"runtime": 2
},
"status": "success",
"version": "0.01"
}
{
"msg": "Actor execution retrieved successfully.",
"result": {
"cpu": 21057728,
"id": "foo_0_exc_6",
"io": 766,
"runtime": 2
},
"status": "success",
"version": "0.01"
}
{
"msg": "Actor execution retrieved successfully.",
"result": {
"cpu": 18149142,
"id": "foo_0_exc_7",
"io": 676,
"runtime": 2
},
"status": "success",
"version": "0.01"
}
{
"msg": "Actor execution retrieved successfully.",
"result": {
"cpu": 24124312,
"id": "foo_0_exc_8",
"io": 926,
"runtime": 2
},
"status": "success",
"version": "0.01"
}
{
"msg": "Actor execution retrieved successfully.",
"result": {
"cpu": 22616394,
"id": "foo_0_exc_9",
"io": 766,
"runtime": 2
},
"status": "success",
"version": "0.01"
}
No timestamp in the response and no indication of ordering, so maybe the logs will give a bit more info.
$ for i in {0..9}; do curl "docker.example.com:8000/actors/foo_0/executions/foo_0_exc_$i/logs" ; done
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [1] execute yourself\nEnvironment:\nHOSTNAME=f94cfe410b78\nSHLVL=1\nHOME=/root\nMSG=[1] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [2] execute yourself\nEnvironment:\nHOSTNAME=aace00f1d2ea\nSHLVL=1\nHOME=/root\nMSG=[2] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [3] execute yourself\nEnvironment:\nHOSTNAME=4ebbfa1021f0\nSHLVL=1\nHOME=/root\nMSG=[3] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [4] execute yourself\nEnvironment:\nHOSTNAME=25575ce5eea5\nSHLVL=1\nHOME=/root\nMSG=[4] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [5] execute yourself\nEnvironment:\nHOSTNAME=bc852f9bc93c\nSHLVL=1\nHOME=/root\nMSG=[5] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [6] execute yourself\nEnvironment:\nHOSTNAME=a5f92c6d8359\nSHLVL=1\nHOME=/root\nMSG=[6] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [7] execute yourself\nEnvironment:\nHOSTNAME=e2803e6987b2\nSHLVL=1\nHOME=/root\nMSG=[7] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [8] execute yourself\nEnvironment:\nHOSTNAME=e7de756163f5\nSHLVL=1\nHOME=/root\nMSG=[8] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [9] execute yourself\nEnvironment:\nHOSTNAME=cd2563bbc5f2\nSHLVL=1\nHOME=/root\nMSG=[9] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [10] execute yourself\nEnvironment:\nHOSTNAME=159159fb4f0c\nSHLVL=1\nHOME=/root\nMSG=[10] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
No info in the logs either, but we do see that the message number in each log corresponds to the id assigned when the message was posted. This indicates that the ordering of the execution ids in the actor details reflects the order in which the original messages were received.
from abaco.
Yes, exactly. Order is preserved in the single-threaded case as I was saying. The logs are simply the standard out for the containers. We can add time stamps to the execution metadata.
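A sketch of what stamping the execution metadata could look like; the `finished_at` field name is hypothetical, not Abaco's actual schema.

```python
from datetime import datetime, timezone

def stamp_execution(execution: dict) -> dict:
    """Attach an ISO 8601 UTC timestamp (hypothetical 'finished_at' field)."""
    execution["finished_at"] = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return execution

rec = stamp_execution({"id": "foo_0_exc_0", "runtime": 2})
print(rec["finished_at"])  # e.g. 2015-11-09T02:48:00+00:00
```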
from abaco.
But if you look at the listing of the actor's executions in the output below, it appears order is NOT preserved. The listings above appear in order only because I queried in a loop following the original message request order; that is not the order returned by the executions endpoint. Is that just how Redis returns the results when no sort is specified, or is that the actual order in which they ran?
—
Rion
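One plausible explanation (an assumption; I haven't checked the schema): if the executions endpoint reads ids from a Redis SET, `SMEMBERS` returns them in arbitrary order, whereas a LIST written with `RPUSH` and read with `LRANGE` preserves insertion order. The analogous Python structures illustrate the difference:

```python
# LIST-like structure: preserves insertion order (cf. Redis RPUSH/LRANGE)
ordered = []
# SET-like structure: iteration order is arbitrary (cf. Redis SADD/SMEMBERS)
unordered = set()

for i in range(10):
    exc_id = f"foo_0_exc_{i}"
    ordered.append(exc_id)
    unordered.add(exc_id)

print(ordered[0], ordered[-1])       # foo_0_exc_0 foo_0_exc_9
# Iterating the set may yield any order; only an explicit sort recovers it.
print(sorted(unordered) == ordered)  # True
```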
from abaco.
Here is the response querying the logs using the ordering supplied by the API response:
$ for i in $(curl -sk "docker.example.com:8000/actors/foo_0/executions" | jq --raw-output '.result.ids[]'); do curl "docker.example.com:8000/actors/foo_0/executions/$i/logs"; done
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [2] execute yourself\nEnvironment:\nHOSTNAME=aace00f1d2ea\nSHLVL=1\nHOME=/root\nMSG=[2] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [1] execute yourself\nEnvironment:\nHOSTNAME=f94cfe410b78\nSHLVL=1\nHOME=/root\nMSG=[1] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [5] execute yourself\nEnvironment:\nHOSTNAME=bc852f9bc93c\nSHLVL=1\nHOME=/root\nMSG=[5] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [3] execute yourself\nEnvironment:\nHOSTNAME=4ebbfa1021f0\nSHLVL=1\nHOME=/root\nMSG=[3] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [7] execute yourself\nEnvironment:\nHOSTNAME=e2803e6987b2\nSHLVL=1\nHOME=/root\nMSG=[7] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [4] execute yourself\nEnvironment:\nHOSTNAME=25575ce5eea5\nSHLVL=1\nHOME=/root\nMSG=[4] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [6] execute yourself\nEnvironment:\nHOSTNAME=a5f92c6d8359\nSHLVL=1\nHOME=/root\nMSG=[6] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [10] execute yourself\nEnvironment:\nHOSTNAME=159159fb4f0c\nSHLVL=1\nHOME=/root\nMSG=[10] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [9] execute yourself\nEnvironment:\nHOSTNAME=cd2563bbc5f2\nSHLVL=1\nHOME=/root\nMSG=[9] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
{
"msg": "Logs retrieved successfully.",
"result": "Contents of MSG: [8] execute yourself\nEnvironment:\nHOSTNAME=e7de756163f5\nSHLVL=1\nHOME=/root\nMSG=[8] execute yourself\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nPWD=/\n",
"status": "success",
"version": "0.01"
}
from abaco.
Yes, there's no ordering enforced by the data structure.
from abaco.
The updated images fixed the ordering in the actor response. The list in the executions response still needs to be sorted by execution order, or it's going to throw people off; that's a deal breaker for the people who would use this service. I was able to verify that, as usual, your default behavior was working as expected, even if it didn't quite respond as expected. Gist here: https://gist.github.com/deardooley/722b445675b14d245a3f
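Until the endpoint sorts server-side, a client can recover execution order from the id suffix. This assumes the `<actor>_exc_<n>` naming shown above holds and that `n` is assigned in message-receipt order:

```python
def sort_execution_ids(ids):
    """Sort Abaco-style execution ids (e.g. 'foo_0_exc_7') numerically
    by the trailing counter, so 'exc_10' sorts after 'exc_9'."""
    return sorted(ids, key=lambda i: int(i.rsplit("_", 1)[1]))

ids = ["foo_0_exc_1", "foo_0_exc_0", "foo_0_exc_4", "foo_0_exc_2"]
print(sort_execution_ids(ids))  # ['foo_0_exc_0', 'foo_0_exc_1', 'foo_0_exc_2', 'foo_0_exc_4']
```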
from abaco.