
APM Integration Testing

This repo contains tools for end-to-end (e.g. agent -> apm server -> elasticsearch <- kibana) development and testing of Elastic APM.

Build Status

Prerequisites

The basic requirements for starting a local environment are:

  • Docker
  • Docker Compose
  • Python (version 3 preferred)

This repo is tested with Python 3. Starting and stopping environments with Python 2.7 is supported on a best-effort basis. To change the default Python version, set the PYTHON environment variable, e.g. PYTHON=python2.

Docker

Installation instructions

Docker compose

Installation instructions

Python 3

Running Local Environments

Starting an Environment

./scripts/compose.py provides a handy CLI for starting a testing environment using docker-compose. make venv creates a virtual environment with all of the Python-based dependencies needed to run ./scripts/compose.py (it requires virtualenv in your PATH). Activate the virtualenv with source venv/bin/activate and use ./scripts/compose.py --help for information on subcommands and arguments. Finally, run ./scripts/compose.py start --help to list all available parameters for starting the environment.

APM LocalEnv Quickstart

Logging in

By default, Security is enabled, which means you need to log into Kibana and/or authenticate with Elasticsearch. An assortment of users is provided to test different scenarios:

  • admin
  • apm_server_user
  • apm_user_ro
  • kibana_system_user
  • *_beat_user

The password for all default users is changeme.
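For illustration, the HTTP Basic auth header that Elasticsearch and Kibana accept can be built as follows (a generic sketch; the admin user and changeme password are the defaults listed above):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # Standard HTTP Basic auth: "Basic " + base64("user:password").
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# e.g. pass basic_auth_header("admin", "changeme") as the
# Authorization header when querying Elasticsearch directly.
```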

Stopping an Environment

All services:

./scripts/compose.py stop

# OR

docker-compose down

All services and clean up volumes and networks:

make stop-env

Individual services:

docker-compose stop <service name>

Example environments

Below is a list of the most common flag combinations that we use internally when developing our APM solution:

| Persona | Flags | Motivation / Use Case | Team | Comments |
|---|---|---|---|---|
| Demos & Screenshots | ./scripts/compose.py start --release --with-opbeans-dotnet --with-opbeans-go --with-opbeans-java --opbeans-java-agent-branch=pr/588/head --force-build --with-opbeans-node --with-opbeans-python --with-opbeans-ruby --with-opbeans-rum --with-filebeat --with-metricbeat 7.3.0 | Demos, screenshots, ad hoc QA. It's also used to send heartbeat data to the cluster for Uptime. | PMM | Used for snapshots when close to a release, without the --release flag |
| Development | ./scripts/compose.py start 7.3 --bc --with-opbeans-python --opbeans-python-agent-local-repo=~/elastic/apm-agent-python | Use current state of local agent repo with opbeans | APM Agents | |
| | ./scripts/compose.py start 7.3 --bc --start-opbeans-deps | This flag would start all opbeans dependencies (postgres, redis, apm-server, ...), but not any opbeans instances | APM Agents | This would help when developing with a locally running opbeans. Currently, we start the environment with a --with-opbeans-python flag, then stop the opbeans-python container manually |
| Developer | ./scripts/compose.py start main --no-apm-server | Only use Kibana + ES in the desired version for testing | APM Server | |
| Developer | ./scripts/compose.py start --release 7.3 --no-apm-server | Use released Kibana + ES, with custom agent and server running on the host, for developing new features that span agent and server | APM Agents | If --opbeans-go-agent-local-repo worked, we might be inclined to use that instead of running custom apps on the host. Would have been handy while developing support for breakdown graphs. Even then, it's probably still faster to iterate on the agent without involving Docker. |
| Developer | ./scripts/compose.py start main --no-kibana | Use newest ES/main, with custom Kibana on the host, for developing new features in Kibana | APM | |
| Developer | ./scripts/compose.py start 6.3 --with-kafka --with-zookeeper --apm-server-output=kafka --with-logstash --with-opbeans-python | Testing with Kafka and Logstash ingestion methods | APM | |
| Developer | ./scripts/compose.py start main --no-kibana --with-opbeans-node --with-opbeans-rum --with-opbeans-x | Developing UI features locally | APM UI | |
| Developer | ./scripts/compose.py start main --docker-compose-path - --skip-download --no-kibana --with-opbeans-ruby --opbeans-ruby-agent-branch=main > docker-compose.yml | Developing UI features against a specific configuration | APM UI | We sometimes explicitly write a docker-compose.yml file and tinker with it until we get the desired configuration before running docker-compose up |
| Developer | scripts/compose.py start ${version} | Manual testing of agent features | APM Agents | |
| Developer | ./scripts/compose.py start main --with-opbeans-java --opbeans-java-agent-branch=pr/588/head --apm-server-build https://github.com/elastic/apm-server.git@main | Test with in-progress agent/server features | APM UI | |
| Developer | ./scripts/compose.py start 7.0 --release --apm-server-version=6.8.0 | Upgrade/mixed version testing | APM | Then, without losing ES data, upgrade/downgrade various components |
| Developer | ./scripts/compose.py start --with-opbeans-python --with-opbeans-python01 --dyno main | Spin up a scenario for testing load generation | APM | The management interface will be available at http://localhost:9000 |

Change default ports

Expose Kibana on http://localhost:1234:

./scripts/compose.py start main --kibana-port 1234

Opbeans

Opbeans are demo web applications that are instrumented with Elastic APM. Start opbeans-* services and their dependencies along with apm-server, elasticsearch, and kibana:

./scripts/compose.py start --all main

This will also start the opbeans-load-generator service which, by default, will generate random requests to all started backend Opbeans services. To disable load generation for a specific service, use the --no-opbeans-XYZ-loadgen flag.

Opbeans RUM does not need a load generation service, as it is itself generating load using a headless Chrome instance.

Start Opbeans with a specific agent branch

You can start Opbeans with an agent which is built from source from a specific branch or PR. This is currently only supported with the Go and the Java agent.

Example which builds the elastic/apm-agent-java#588 branch from source and uses an APM server built from main:

./scripts/compose.py start main --with-opbeans-java --opbeans-java-agent-branch=pr/588/head --apm-server-build https://github.com/elastic/apm-server.git@main

Note that it may take a while to build the agent from source.

Another example, which installs the APM Python Agent from the main branch for testing against opbeans-python (for example, for end to end log correlation testing):

./scripts/compose.py start main --with-opbeans-python --with-filebeat --opbeans-python-agent-branch=main --force-build

Note that we use --opbeans-python-agent-branch to define the Python agent branch for opbeans-python, rather than --python-agent-package, which only applies to the --with-python-agent-* flags for the small integration test apps.

Uploading Sourcemaps

The frontend app packaged with opbeans-node runs in a production build, which means the source code is minified. The APM server needs the corresponding sourcemap to unminify the code.

You can upload the sourcemap with this command:

./scripts/compose.py upload-sourcemap

In the standard setup, it will find the config options itself, but they can be overwritten. See

./scripts/compose.py upload-sourcemap --help

Kafka output

./scripts/compose.py start --with-kafka --with-zookeeper --apm-server-output=kafka --with-logstash main

Logstash will be configured to ingest events from Kafka.

Topics are named according to service name. To view events for 1234_service-12a3:

docker exec -it localtesting_6.3.0-SNAPSHOT_kafka kafka-console-consumer --bootstrap-server kafka:9092 --topic apm-1234_service-12a3 --from-beginning --max-messages 100

Onboarding events will go to the apm topic.
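The topic naming above can be sketched as follows (a hypothetical helper for illustration, not code from this repo):

```python
def kafka_topic(service_name: str) -> str:
    # Events for a given service go to an "apm-" prefixed topic,
    # e.g. service "1234_service-12a3" -> topic "apm-1234_service-12a3".
    # Onboarding events go to the plain "apm" topic instead.
    return f"apm-{service_name}"
```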

Note that index templates are not loaded automatically when using outputs other than Elasticsearch. Create them manually with:

./scripts/compose.py load-dashboards

If data was inserted before this point (e.g. an opbeans service was started), you'll probably have to delete the auto-created apm-* indexes and let them be recreated.

🦖 Load testing in Dyno mode

The APM Integration Test Suite can create an environment useful for modeling scenarios in which services fail or experience a variety of constraints, such as CPU, memory, or network pressure.

Starting the APM Integration Test Suite with the ability to generate load and manipulate the performance characteristics of individual services is called Dyno Mode.

To enable Dyno Mode, append the --dyno flag to the arguments given to ./scripts/compose.py. When this flag is passed, the test suite starts as it normally would, but additional components are enabled which allow the user to generate load for various services and to manipulate the performance of various components.

After starting in Dyno Mode, navigate to http://localhost:9000 in your browser. There, you should be presented with a page which shows the Opbeans which are running, along with any dependent services, such as Postgres or Redis.

A pane on the left-hand side of the window allows load generation to be started and stopped by clicking the checkbox for the Opbean(s) you wish to apply load to. Unchecking the box for an Opbean in this pane causes load generation to cease.

After load generation is started, the number of requests can be adjusted by moving the W slider for the relevant Opbean up or down. To control the likelihood that a request will result in an error in the application which can be seen in APM, use the E slider to adjust the error rate. Moving the slider up will result in a higher percentage of requests being errors.

Supported Dyno Opbeans

Not all Opbeans are supported for use with Dyno

| Opbean | Supported |
|--------|-----------|
| Python | ✅ |
| Go | 🔲 |
| .NET | 🔲 |
| Java | 🔲 |
| Node | 🔲 |
| Ruby | 🔲 |

Using Dyno with a remote APM Server

It is possible to connect the infrastructure generated by compose.py to a remote APM Server, which makes it possible to send APM data from the scenarios modeled with Dyno. The following command launches opbeans-python and Dyno locally; opbeans-python is configured with the service environment local, the service name dyno-service, and the version name test-demo:

APM_SERVER_URL=https://apm.example.com \
APM_TOKEN=MySuPerApMToKen \
python3 ./scripts/compose.py start 8.0.0 \
  --no-kibana \
  --no-elasticsearch \
  --dyno  \
  --apm-server-url "${APM_SERVER_URL}" \
  --apm-server-secret-token="${APM_TOKEN}" \
  --with-opbeans-python \
  --opbeans-python-service-environment local \
  --opbeans-node-service-version test-demo \
  --opbeans-python-service-name dyno-service

Once the Docker containers have started, you can connect to the Dyno UI at http://localhost:9000 and modify the scenario to cause errors; in this case, we have disabled PostgreSQL to cause a database service error.

Then we can check the result of our changes in the APM UI; in this case, we can see that the error rate for PostgreSQL is increasing.

Introducing failure into the network

For each service, different classes of network failure can be introduced and adjusted with their respective sliders. They are as follows:

| Slider key | Slider name | Description |
|---|---|---|
| L | Latency | Adds latency to all data. The overall delay is equal to latency +/- jitter. |
| J | Jitter | Adds jitter to all data. The overall delay is equal to latency +/- jitter. |
| B | Bandwidth | The overall amount of bandwidth available to the connection. |
| T | Timeout | Stops all data from getting through, and closes the connection after the timeout. If the timeout is 0, the connection won't close, and data will be delayed until the timeout is increased. |
| Sas | Packet slice average size | Slice TCP packets into this average size. |
| Ssd | Packet slice average delay | Introduce a delay between the transmission of each packet. |
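The latency/jitter relationship described above (overall delay = latency +/- jitter) can be sketched as (illustration only, not code from the suite):

```python
import random

def overall_delay_ms(latency_ms: float, jitter_ms: float) -> float:
    # The overall delay is the configured latency, varied randomly
    # by up to +/- the configured jitter.
    return latency_ms + random.uniform(-jitter_ms, jitter_ms)
```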

Modifying system properties

The container for each service may be instantly resized, adjusting the amount of available CPU power or total memory up or down, through use of the CPU and Memory sliders respectively.

Enabling/disabling services

Unchecking the button immediately above the panel of sliders for each service will immediately cut off access to that service. The service itself will remain up but no traffic will be routed to it. Re-checking the box will immediately restore the network connectivity for the service.

Debugging problems

Load generation is on but no requests are arriving

Occasionally, all network routing will fail due to an unresolved thread-safety bug in the proxy server. If you expect load generation to be running but you do not see any traffic arriving at the Opbeans, it is possible that the network proxy has crashed. To quickly restore it, run docker-compose restart toxi. Traffic should be immediately restored.

Advanced topics

Dumping docker-compose.yml

./scripts/compose.py start main --docker-compose-path - --skip-download will dump the generated docker-compose.yml to standard out (-) without starting any containers or downloading images.

Omit --skip-download to just download images.

Testing compose

compose.py includes unit tests; run them with make test-compose.

Jaeger

APM Server can work as a drop-in replacement for a Jaeger collector and ingest traces directly from a Jaeger agent via gRPC.

To test Jaeger/gRPC, start apm-integration-testing, run the Jaeger Agent, and start the Jaeger hotrod demo:

./scripts/compose.py start 7.13 \
--apm-server-secret-token="abc123"
docker run --rm -it --name jaeger-agent --network apm-integration-testing -p6831:6831/udp \
-e REPORTER_GRPC_HOST_PORT=apm-server:8200 \
-e AGENT_TAGS="elastic-apm-auth=Bearer abc123" \
jaegertracing/jaeger-agent:latest
docker run --rm -it --network apm-integration-testing \
-e JAEGER_AGENT_HOST=jaeger-agent \
-e JAEGER_AGENT_PORT=6831 \
-p8080-8083:8080-8083 jaegertracing/example-hotrod:latest all

Finally, navigate to http://localhost:8080/ and click around to generate data.

Running Tests

Additional dependencies are required for running the integration tests:

  • python3
  • virtualenv

On a Mac with Homebrew:

brew install pyenv-virtualenv

All integration tests are written in python and live under tests/.

Several make targets exist to make their execution simpler:

  • test-server
  • test-kibana

These targets will create a Python virtual environment in venv with all of the dependencies needed to run the suite.

Each target requires a running test environment, providing an apm-server, elasticsearch, and others depending on the particular suite.

Tests should always eventually be run within a Docker container to ensure a consistent, repeatable environment for reporting.

Prefix any of the test- targets with docker- to run them in a container, e.g. make docker-test-server.

Diagnosing network issues

It is possible to diagnose network issues related to lost documents between the APM Agent, APM Server, and Elasticsearch.

To do so, add the --with-packetbeat argument to your command line.

With this argument, an additional Docker container running Packetbeat is attached to the APM Server container. It captures the communication between the APM Agent, APM Server, and Elasticsearch, which you can analyze in case of failure.

When a test fails, data related to Packetbeat and APM is dumped with elasticdump into two files: /app/tests/results/data-NAME_OF_THE_TEST.json and /app/tests/results/packetbeat-NAME_OF_THE_TEST.json.

Continuous Integration

Jenkins runs the scripts from .ci/scripts and is viewable at https://apm-ci.elastic.co/.

Those scripts shut down any existing testing containers and start a fresh new environment before running tests unless the REUSE_CONTAINERS environment variable is set.

These are the scripts available to execute:

  • common.sh: common scripts variables and functions. It does not execute anything.

  • unit-tests.sh: runs the unit tests for the apm-integration-testing app and validates linting. You can choose the versions to run; see the environment variables configuration.

Environment Variables

Some options and versions can be configured by defining environment variables before launching the scripts.

  • COMPOSE_ARGS: completely replaces the default compose.py arguments used by the scripts. See the compose.py help for the arguments you can use.

  • DISABLE_BUILD_PARALLEL: by default Docker images are built in parallel. If you set DISABLE_BUILD_PARALLEL=true then the Docker images will build in series, which helps to make the logs more readable.

  • BUILD_OPTS: appends arguments to the default arguments passed to compose.py. See the compose.py help for the arguments you can use.

  • ELASTIC_STACK_VERSION: selects the Elastic Stack version to use in tests. By default the main branch is used. You can choose any branch or tag from the GitHub repo.

  • APM_SERVER_BRANCH: selects the APM Server version to use in tests. By default it uses the main branch. You can choose any branch or tag from the GitHub repo.

Testing docker images

Tests are written using bats and live under the docker/tests directory:

make -C docker test-<app>
make -C docker test-opbeans-<agent>

To test all the Opbeans docker images:

make -C docker all-opbeans-tests


apm-integration-testing's Issues

Run older builds against matching release

6.2 snapshots are no longer produced, but we want to ensure some agents are still compatible with a 6.2 stack. Version tests currently only run against snapshots; let's make them optionally work with releases.

test oss artifacts

everything should still work except:

  • kibana templates should not load
  • no monitoring
  • ui should be disabled

make dashboard loading depend on kibana

If kibana is not started, disable/don't enable dashboard loading for apm-server, filebeat, and metricbeat. Right now [in apm-integration-testing] apm-server doesn't load dashboards by default because of this dependency handling. Likewise, filebeat and metricbeat will not start without kibana since they are always set to load dashboards.

Unified opbeans load generator service

Instead of having a load generator for each service, we'd like to have a single service that generates load on a configurable list of services

  • image that takes a list of base URLs to generate load on
  • flags in compose.py for each opbeans service, e.g. --load-gen-python, --load-gen-go
  • ensure all opbeans services listen on internal port 3000
    • node
    • python
    • ruby
    • go
    • java

Make use of local testing setup

Figure out what the best way is to make use of the local testing setup and eventually merge the docker setup configurations.

build failures

Lots of build failures lately:

17:18:44 waiting for services to be healthy
17:18:44 docker-compose-wait
17:18:45 Some processes failed
17:18:45 Makefile:72: recipe for target 'dockerized-test' failed

apm-server is not coming up properly:

localtesting_7.0.0-alpha1_apm-server | {"level":"info","timestamp":"2018-09-11T21:39:22.464Z","caller":"instance/beat.go:373","message":"apm-server stopped."}
localtesting_7.0.0-alpha1_apm-server | {"level":"error","timestamp":"2018-09-11T21:39:22.464Z","caller":"instance/beat.go:757","message":"Exiting: Error importing Kibana dashboards: fail to create the Kibana loader: Error creating Kibana client: Error creating Kibana client: fail to get the Kibana version: HTTP GET request to /api/status fails: parsing kibana response: invalid character 'K' looking for beginning of value. Response: Kibana server is not ready yet."}
localtesting_7.0.0-alpha1_apm-server | Exiting: Error importing Kibana dashboards: fail to create the Kibana loader: Error creating Kibana client: Error creating Kibana client: fail to get the Kibana version: HTTP GET request to /api/status fails: parsing kibana response: invalid character 'K' looking for beginning of value. Response: Kibana server is not ready yet.

The kibana healthcheck isn't holding apm-server back long enough for kibana to really be available.

Clarify readme on how to run Opbeans sample apps

It is not clear which requirements are needed to run the Opbeans apps; the readme says you need make venv and virtualenv, but in theory they are not needed.

We should have really clear instructions on how to run only opbeans, maybe even separating them, given that most people will only be running these.

Test version compatibility

Support testing different versions against each other.

  • Test agent versions against apm-server versions
  • Test apm-server versions against elasticsearch and kibana versions

As a minimum, current master versions and the versions in the official support matrix should be tested against each other.

Tests should work with a local dockerized setup, where different versions of services must be downloaded and started, as well as with running instances where only a URL is passed in (goal for cloud testing).

Add Selenium support to tests.

Check out selenium support on Jenkins and available plugins.

Build basic selenium test structure.

Add selenium tests for dashboards.

Add selenium tests for curated UI.

add options for elasticsearch license management

When starting a stack on 6.2, Elasticsearch comes with a platinum trial license. As of 6.3, the basic license is the default.

Proposal: default to platinum trial license in 6.3+, making the behavior consistent and provide an option to start without that license.

Add json schema validation tests for agents

Check out the json specs from the server and test the agent payload that is sent against those specs in a strict manner.

Discuss whether these tests should be here or in the individual agent repositories.

[APM-CI][Integration test] Unify version/package/branch/whatever parameters names for compose.py and scripts

We are modifying compose.py to support using a version from a commit SHA. While reviewing the configuration of agents and opbeans modules, we saw that there are no common criteria for naming the parameters (go-agent-version, nodejs-agent-package, rum-agent-branch, ...). IMHO there should be only one way: you want to set the version/package/branch/whatever to use, and if we have 3 different ways to set it, you have to remember 3 different parameters. I will open a discussion about it.

WDYT?

@elastic/apm-server @elastic/apm-agent-devs

make force build skip docker cache

While using the docker cache changes in agent repos do not trigger container image rebuilds.

--force-build currently adds the --build option to docker-compose up - that does not skip the docker cache. Add an extra step to docker-compose build --no-cache when --force-build is enabled.

Setup tests on Jenkins

Create Jenkins setup to automatically run tests.

Start out with a daily run of the full testsuite.

Finetuning can be done later, so that e.g. version compatibility is checked as soon as something gets merged to a specific version.

Docker issues? Shard failures, not being able to see transactions data in Discover

Not sure if it's my local docker getting corrupted or if it's a new issue, but it seems that Discover won't load any data when filtered. It's especially evident when opening transaction sample data in Discover (like the link we have on the Transaction detail pages)


Perhaps this isn't even related to the Docker setup, but I've talked it over with @sqren and he thought it might be the integration testing Docker containers that are failing.

disable opbeans loadgen when no services need it

compose.py --with-opbeans-python --no-opbeans-python-loadgen should realize that no services need load generation and prevent addition of the service. Currently, we end up with a service with empty OPBEANS_URLS instead.

"opbeans-load-generator": {
  "container_name": "localtesting_7.0.0-alpha1_opbeans-load-generator",
  "depends_on": {},
  "environment": [
    "OPBEANS_URLS=",
    "OPBEANS_RPMS="
  ],
  "image": "opbeans/opbeans-loadgen:latest",
  "logging": {
    "driver": "json-file",
    "options": {
      "max-file": "5",
      "max-size": "2m"
    }
  }
},
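A minimal sketch of the guard this issue asks for (hypothetical, not the actual compose.py code):

```python
def needs_load_generator(opbeans_urls):
    # Only add the opbeans-load-generator service when at least one
    # opbeans service is actually left to generate load against,
    # rather than emitting a service with an empty OPBEANS_URLS.
    return bool(opbeans_urls)
```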

Investigate flaky tests for python agent.

After fixing some bugs and merging a version update PR, the tests for the python agent became flaky; see e.g. https://apm-ci.elastic.co/job/elastic+apm-integration-testing+master+multijob-python-agent-versions/205/.

tornado is used to send concurrent requests to the example apps. This works fine with other agents and apps, but for the python apps we have been receiving 500 responses since last week; see e.g.

13:46:50 ----------------------------- Captured stderr call -----------------------------
13:46:50 [2018-05-21 13:44:42,693] [10] [INFO] [run -             263]  Testing started..
13:46:50 [2018-05-21 13:44:42,693] [10] [INFO] [run -             263]  Testing started..
13:46:50 [2018-05-21 13:44:42,747] [10] [INFO] [run -             267]  Sending batch 1 / 1
13:46:50 [2018-05-21 13:44:42,747] [10] [INFO] [run -             267]  Sending batch 1 / 1
13:46:50 [2018-05-21 13:44:42,820] [10] [INFO] [load_test -             101]  Starting tornado I/O loop
13:46:50 [2018-05-21 13:44:42,820] [10] [INFO] [load_test -             101]  Starting tornado I/O loop
13:46:50 [2018-05-21 13:44:50,074] [10] [ERROR] [handle -             85]  Bad response, aborting: 500 - HTTP 500: INTERNAL SERVER ERROR (0.24900579452514648)
13:46:50 [2018-05-21 13:44:50,074] [10] [ERROR] [handle -             85]  Bad response, aborting: 500 - HTTP 500: INTERNAL SERVER ERROR (0.24900579452514648)
13:46:50 ioloop.py                  638 ERROR    Exception in callback functools.partial(<function wrap.<locals>.null_wrapper at 0x7fcb65a072f0>, HTTPResponse(_body=None,buffer=<_io.BytesIO object at 0x7fcb65fb8780>,code=500,effective_url='http://flaskapp:8001/bar?q=1',error=HTTP 500: INTERNAL SERVER ERROR,headers=<tornado.httputil.HTTPHeaders object at 0x7fcb65a0c9e8>,reason='INTERNAL SERVER ERROR',request=<tornado.httpclient.HTTPRequest object at 0x7fcb65a00d68>,request_time=0.24900579452514648,time_info={}))
13:46:50 Traceback (most recent call last):
13:46:50   File "/usr/local/lib/python3.6/site-packages/tornado/ioloop.py", line 605, in _run_callback
13:46:50     ret = callback()
13:46:50   File "/usr/local/lib/python3.6/site-packages/tornado/stack_context.py", line 277, in null_wrapper
13:46:50     return fn(*args, **kwargs)
13:46:50   File "/app/tests/agent/concurrent_requests.py", line 86, in handle
13:46:50     raise Exception(message)
13:46:50 Exception: Bad response, aborting: 500 - HTTP 500: INTERNAL SERVER ERROR (0.24900579452514648)

Opbeans: implement randomized distributed tracing

To have somewhat realistic test data for the UI, the Opbeans services should implement distributed tracing amongst each other. The plan is as follows:

  • when using compose.py, an environment variable, OPBEANS_SERVICES will be set for all opbeans services, containing a list of all opbeans services in a comma-separated list, e.g.

     OPBEANS_SERVICES=opbeans-node,opbeans-python,opbeans-go
    

    these services can either be:

    • "opbeans-xyz", in which case the base URL http://opbeans-xyz:3000 should be used
    • or a URL that starts with http, in which case the URL should be used verbatim as the base URL
  • when a GET request to /api/* comes in, the opbeans service flips a coin. Depending on the outcome, it either serves the request itself, or forwards the request via instrumented HTTP-request to one of the services from OPBEANS_SERVICES using the URL http://opbeans-XYZ:3000/original/url.

  • there will be a parameter to define the probability of relaying the call to another node (DT). The default value will be 0.5 and 0 will disable it.

    OPBEANS_DT_PROBABILITY=0.5
    

Note: some opbeans implementations have non-standard endpoints in the /api/ namespace. These shouldn't be affected by the randomized forwarding
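The URL resolution and coin flip described above can be sketched as follows (a hypothetical illustration of the proposal, not code from the repo):

```python
import random

def base_urls(opbeans_services):
    # OPBEANS_SERVICES entries are either "opbeans-xyz" names,
    # mapped to the base URL http://opbeans-xyz:3000, or entries
    # starting with "http", used verbatim as the base URL.
    urls = []
    for entry in opbeans_services.split(","):
        entry = entry.strip()
        urls.append(entry if entry.startswith("http") else f"http://{entry}:3000")
    return urls

def should_forward(probability=0.5):
    # Coin flip on each GET /api/* request: forward it to a peer
    # with probability OPBEANS_DT_PROBABILITY (default 0.5),
    # otherwise serve it locally. A probability of 0 disables it.
    return random.random() < probability
```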

Implemented in:

upload-sourcemap command broken

It doesn't appear to have been updated since the unification with localtesting. It should upload the right file with the right parameters by default. Those appear to be:

--sourcemap-file docker/opbeans/node/sourcemaps/main.9fdc3a70.js.map \
--service-version 1.0.0 \
--service-name client \
--bundle-path http://opbeans-node:3000/static/js/main.9fdc3a70.js
