
aws-xray-daemon's Introduction


AWS X-Ray Daemon

The AWS X-Ray daemon is a software application that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the AWS X-Ray API.
The daemon works in conjunction with the AWS X-Ray SDKs and must be running so that data sent by the SDKs can reach the X-Ray service. For more information, see AWS X-Ray Daemon.

Getting Help

Use the following community resources to get help with the AWS X-Ray daemon. We use GitHub issues to track bugs and feature requests.

Sending Segment Documents

The X-Ray SDK sends segment documents to the daemon to avoid making calls to AWS directly. You can send a segment or subsegment as JSON over UDP port 2000 to the X-Ray daemon, prefixed with the daemon header: {"format": "json", "version": 1}\n

{"format": "json", "version": 1}\n{<serialized segment data>}

For more details, refer to the AWS X-Ray documentation on sending segment documents.
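As a concrete illustration of the wire format above, the following is a minimal Go sketch that prepends the daemon header to a serialized segment and sends it over UDP. It assumes a daemon listening on the default 127.0.0.1:2000; the segment document and its trace/segment IDs are hypothetical placeholders, not real values.

```go
package main

import (
	"fmt"
	"net"
)

// buildMessage prepends the daemon header to a serialized segment document.
func buildMessage(segment string) []byte {
	header := `{"format": "json", "version": 1}` + "\n"
	return []byte(header + segment)
}

func main() {
	// A minimal, hypothetical segment document for illustration only.
	segment := `{"name":"example","id":"70de5b6f19ff9a0a",` +
		`"trace_id":"1-581cf771-a006649127e371903a2de979",` +
		`"start_time":1478293361.271,"end_time":1478293361.449}`

	// Assumes the daemon is running on its default UDP address.
	conn, err := net.Dial("udp", "127.0.0.1:2000")
	if err != nil {
		fmt.Println("dial error:", err)
		return
	}
	defer conn.Close()

	payload := buildMessage(segment)
	if _, err := conn.Write(payload); err != nil {
		fmt.Println("write error:", err)
		return
	}
	fmt.Printf("sent %d bytes\n", len(payload))
}
```

Because UDP is connectionless, the write succeeds even if no daemon is listening; check the daemon's logs (e.g. "Successfully sent batch of 1 segments") to confirm delivery.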

Installing

The AWS X-Ray Daemon is compatible with Go 1.8 and later.

Install the daemon using the following command:

go get -u github.com/aws/aws-xray-daemon/...  

Credential Configuration

The AWS X-Ray Daemon follows default credential resolution for the aws-sdk-go.

Follow the aws-sdk-go guidelines for credential configuration.

Daemon Usage (command line args)

Usage: xray [options]

Option                Description
-a, --resource-arn    Amazon Resource Name (ARN) of the AWS resource running the daemon.
-o, --local-mode      Don't check for EC2 instance metadata.
-m, --buffer-memory   Change the amount of memory in MB that buffers can use (minimum 3).
-n, --region          Send segments to the X-Ray service in a specific region.
-b, --bind            Overrides the default UDP address (127.0.0.1:2000).
-t, --bind-tcp        Overrides the default TCP address (127.0.0.1:2000).
-r, --role-arn        Assume the specified IAM role to upload segments to a different account.
-c, --config          Load a configuration file from the specified path.
-f, --log-file        Output logs to the specified file path.
-l, --log-level       Log level, from most verbose to least: dev, debug, info, warn, error, prod (default).
-p, --proxy-address   Proxy address through which to upload segments.
-v, --version         Show the AWS X-Ray daemon version.
-h, --help            Show this screen.

Build

make build builds binaries and .zip files in the /build folder for the Linux, macOS, and Windows platforms.

Linux

make build-linux builds binaries and .zip files in the /build folder for the Linux platform.

macOS

make build-mac builds binaries and .zip files in the /build folder for the macOS platform.

Windows

make build-windows builds binaries and .zip files in the /build folder for the Windows platform.

Build for ARM architecture

Currently, the make build script builds artifacts for the AMD64 architecture. You can build the X-Ray daemon for ARM by using the go build command and setting GOARCH to arm64. To build the daemon binary on a Linux ARM machine, you can use the following command:

GOOS=linux GOARCH=arm64 go build -ldflags "-s -w" -o xray cmd/tracing/daemon.go cmd/tracing/tracing.go

As of Aug 31, 2020, Windows and Darwin builds for ARM64 are not supported by go build.

Pulling X-Ray Daemon image from ECR Public Gallery

Before pulling an image, you should authenticate your Docker client to the Amazon ECR public registry. For registry authentication options, see the Amazon ECR Public documentation.

Run the following command to authenticate to the public ECR registry using get-login-password (AWS CLI):

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws

Pull the alpha tag from the Public ECR Gallery:

docker pull public.ecr.aws/xray/aws-xray-daemon:alpha

Pull a released version tag from the Public ECR Gallery:

docker pull public.ecr.aws/xray/aws-xray-daemon:3.2.0

NOTE: We do not recommend using the daemon image with the alpha tag in production environments. For production, pull an image with a released version tag.

X-Ray Daemon Performance Report

EC2 Instance Type: T2.Micro [1 vCPU, 1 GB Memory]

Collection time: 10 minutes per TPS (TPS = Number of segments sent to daemon in 1 second)

Daemon version tested: 3.3.6

TPS     Avg CPU Usage (%)    Avg Memory Usage (MB)
0       0                    17.07
100     0.9                  28.5
200     1.87                 29.3
400     3.76                 29.1
1000    9.36                 29.5
2000    18.9                 29.7
4000    38.3                 29.5

Testing

make test runs unit tests for the X-Ray daemon.

License

This library is licensed under the Apache 2.0 License.

aws-xray-daemon's People

Contributors

atshaw43, belindac, bhautikpip, billthedozer, carolabadeer, defond0, dependabot[bot], dgtm, gliptak, grosa1, haotianw465, ilpianista, jj22ee, kiranmeduri, lewayne-aws, luben, luluzhao, mousedownmike, mschfh, rosswilson, shayaantx, shengxil, srprash, vissree, vmanikes, wangzlei, willarmiros, zhengyal


aws-xray-daemon's Issues

Failed to start X-Ray Daemon

Since moving to v2.1.2 the X-Ray Daemon will no longer start on our Ubuntu servers.

Files pulled with:
wget https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-2.x.deb

And installed with:
sudo dpkg -i aws-xray-daemon-2.x.deb

However, upon running:
sudo service xray status

I see the following errors:

● xray.service - AWS X-Ray Daemon
Loaded: loaded (/lib/systemd/system/xray.service; disabled; vendor preset: enabled)
Active: inactive (dead)

May 17 20:46:10 ip-10-2-112-10 systemd[1]: xray.service: Unit entered failed state.
May 17 20:46:10 ip-10-2-112-10 systemd[1]: xray.service: Failed with result 'exit-code'.
May 17 20:46:10 ip-10-2-112-10 systemd[1]: xray.service: Service hold-off time over, scheduling restart.
May 17 20:46:10 ip-10-2-112-10 systemd[1]: Stopped AWS X-Ray Daemon.
May 17 20:46:10 ip-10-2-112-10 systemd[1]: xray.service: Start request repeated too quickly.
May 17 20:46:10 ip-10-2-112-10 systemd[1]: Failed to start AWS X-Ray Daemon.
May 17 20:46:18 ip-10-2-112-10 systemd[1]: [/lib/systemd/system/xray.service:18] Unknown lvalue 'LogsDirectory' in section 'Service'
May 17 20:46:18 ip-10-2-112-10 systemd[1]: [/lib/systemd/system/xray.service:19] Unknown lvalue 'LogsDirectoryMode' in section 'Service'
May 17 20:46:18 ip-10-2-112-10 systemd[1]: [/lib/systemd/system/xray.service:20] Unknown lvalue 'ConfigurationDirectory' in section 'Service'
May 17 20:46:18 ip-10-2-112-10 systemd[1]: [/lib/systemd/system/xray.service:21] Unknown lvalue 'ConfigurationDirectoryMode' in section 'Service'

I've tried to remove those lines from the /lib/systemd/system/xray.service file but it still will not start due to a group issue:

● xray.service - AWS X-Ray Daemon
Loaded: loaded (/lib/systemd/system/xray.service; disabled; vendor preset: enabled)
Active: inactive (dead)

May 17 21:05:51 ip-10-2-112-10 systemd[1]: xray.service: Service hold-off time over, scheduling restart.
May 17 21:05:51 ip-10-2-112-10 systemd[1]: Stopped AWS X-Ray Daemon.
May 17 21:05:51 ip-10-2-112-10 systemd[1]: Started AWS X-Ray Daemon.
May 17 21:05:51 ip-10-2-112-10 systemd[1]: xray.service: Main process exited, code=exited, status=216/GROUP
May 17 21:05:51 ip-10-2-112-10 systemd[1]: xray.service: Unit entered failed state.
May 17 21:05:51 ip-10-2-112-10 systemd[1]: xray.service: Failed with result 'exit-code'.
May 17 21:05:52 ip-10-2-112-10 systemd[1]: xray.service: Service hold-off time over, scheduling restart.
May 17 21:05:52 ip-10-2-112-10 systemd[1]: Stopped AWS X-Ray Daemon.
May 17 21:05:52 ip-10-2-112-10 systemd[1]: xray.service: Start request repeated too quickly.
May 17 21:05:52 ip-10-2-112-10 systemd[1]: Failed to start AWS X-Ray Daemon.

EC2 Windows Server - Unable to start service

I'm trying to run X-Ray daemon as a service on AWS EC2 Windows Server but I can't.

I've received the error as below:

Unable to retrieve the region from the EC2 instance EC2MetadataRequestError: failed to get EC2 instance identity document caused by: RequestError: send request failed caused by: Get http://169.254.169.254/latest/dynamic/instance-identity/document: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

I've tried to add region to the configuration file but then I see the error: [Error] Cannot fetch region variable from config file, environment variables and ec2 metadata.

Finally, I've added new system variable AWS_REGION but now I can see another error as below:

2020-11-04T15:37:31Z [Error] Get instance id metadata failed: RequestError: send request failed caused by: Get http://169.254.169.254/latest/meta-data/instance-id: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

SerializationError: failed decoding REST JSON error response

I'm running AWS X-Ray daemon version 3.0.0 on my Windows 7 machine with the following options...

./xray_windows.exe -o -n us-east-1 -l debug

... and see SerializationErrors such as the following during idle periods.

2019-02-22T13:38:03-05:00 [Debug] Failed to send telemetry 1 record(s). Re-queue records. SerializationError: failed decoding REST JSON error response
caused by: invalid character '<' looking for beginning of value
2019-02-22T13:39:04-05:00 [Debug] Send 2 telemetry record(s)
2019-02-22T13:40:04-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:41:04-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:42:04-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:43:04-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:44:04-05:00 [Debug] Failed to send telemetry 1 record(s). Re-queue records. SerializationError: failed decoding REST JSON error response
caused by: invalid character '<' looking for beginning of value
2019-02-22T13:45:04-05:00 [Debug] Send 2 telemetry record(s)
2019-02-22T13:46:04-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:47:04-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:48:05-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:49:05-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:50:05-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:51:05-05:00 [Debug] Failed to send telemetry 1 record(s). Re-queue records. SerializationError: failed decoding REST JSON error response
caused by: invalid character '<' looking for beginning of value
2019-02-22T13:52:05-05:00 [Debug] Send 2 telemetry record(s)
2019-02-22T13:53:05-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:54:05-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:55:06-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:56:06-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:57:06-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:58:06-05:00 [Debug] Send 1 telemetry record(s)
2019-02-22T13:59:06-05:00 [Debug] Failed to send telemetry 1 record(s). Re-queue records. SerializationError: failed decoding REST JSON error response
caused by: invalid character '<' looking for beginning of value
2019-02-22T14:00:06-05:00 [Debug] Failed to send telemetry 2 record(s). Re-queue records. SerializationError: failed decoding REST JSON error response
caused by: invalid character '<' looking for beginning of value
2019-02-22T14:01:06-05:00 [Debug] Send 3 telemetry record(s)

Oddly enough, this doesn't seem to interfere with tracing. Is there some way to get more information? Is this something that I should not worry about (since it's a debug message)?

X-Ray Service map not showing call from one micro service to another AWS EKS Fargate

I have an AWS EKS Fargate setup with an ingress gateway that makes calls to two microservices. The services are Permissions and Service-Providers.

X-Ray Service Map does show the calls are made correctly.

image

So, the call goes from Virtual gateway to permissions and service-providers and is shown on service map.

However, there is another call between these two services that is not routed through the gateway: service-providers contacts permissions to find out whether the user who made the call has permission to get that data, based on his/her auth token.

This particular call does not show up in the service map. As you can see, there is no line connecting the two services, although in the traces I do see that call.

image

The permissions service is referenced as http://ganesh-permissions in the service providers service. What am I missing?

Ingress Gateway X-ray logs

2020-10-12T17:21:51Z [Info] Successfully sent batch of 1 segments (0.004 seconds)
2020-10-12T17:21:52Z [Info] Successfully sent batch of 1 segments (0.004 seconds)
2020-10-12T17:21:54Z [Info] Successfully sent batch of 2 segments (0.013 seconds)
2020-10-12T17:21:57Z [Info] Successfully sent batch of 1 segments (0.007 seconds)
2020-10-12T17:22:04Z [Info] Successfully sent batch of 1 segments (0.006 seconds)
2020-10-12T17:22:06Z [Info] Successfully sent batch of 1 segments (0.014 seconds)
2020-10-12T17:22:07Z [Info] Successfully sent batch of 1 segments (0.016 seconds)
2020-10-12T17:22:09Z [Info] Successfully sent batch of 2 segments (0.013 seconds)
2020-10-12T17:22:12Z [Info] Successfully sent batch of 1 segments (0.014 seconds)
2020-10-12T17:22:19Z [Info] Successfully sent batch of 1 segments (0.004 seconds)
2020-10-12T17:22:21Z [Info] Successfully sent batch of 1 segments (0.004 seconds)
2020-10-12T17:22:22Z [Info] Successfully sent batch of 1 segments (0.013 seconds)
2020-10-12T17:22:24Z [Info] Successfully sent batch of 1 segments (0.004 seconds)

service-providers logs

2020-10-12T16:51:07Z [Info] Initializing AWS X-Ray daemon 3.2.0
2020-10-12T16:51:07Z [Info] Using buffer memory limit of 74 MB
2020-10-12T16:51:07Z [Info] 1184 segment buffers allocated
2020-10-12T16:51:07Z [Info] Using region: us-east-1
2020-10-12T16:52:08Z [Error] Get instance id metadata failed: RequestError: send request failed
caused by: Get http://169.254.169.254/latest/meta-data/instance-id: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
2020-10-12T16:52:08Z [Info] HTTP Proxy server using X-Ray Endpoint : https://xray.us-east-1.amazonaws.com
2020-10-12T16:52:08Z [Info] Starting proxy http server on 0.0.0.0:2000
2020-10-12T16:52:10Z [Info] Successfully sent batch of 1 segments (0.285 seconds)
2020-10-12T16:52:10Z [Info] Successfully sent batch of 1 segments (0.014 seconds)
2020-10-12T17:05:53Z [Info] Successfully sent batch of 1 segments (0.012 seconds)
2020-10-12T17:14:02Z [Info] Successfully sent batch of 1 segments (0.016 seconds)

permissions logs

2020-10-15T05:06:48Z [Info] Successfully sent batch of 50 segments (0.010 seconds)
2020-10-15T05:06:48Z [Info] Successfully sent batch of 50 segments (0.012 seconds)
2020-10-15T05:06:48Z [Info] Successfully sent batch of 50 segments (0.010 seconds)
2020-10-15T05:06:48Z [Info] Successfully sent batch of 50 segments (0.010 seconds)
2020-10-15T05:06:48Z [Info] Successfully sent batch of 50 segments (0.011 seconds)
2020-10-15T05:06:49Z [Info] Successfully sent batch of 50 segments (0.009 seconds)
2020-10-15T05:06:49Z [Info] Successfully sent batch of 50 segments (0.010 seconds)
2020-10-15T05:06:49Z [Info] Successfully sent batch of 50 segments (0.011 seconds)
2020-10-15T05:06:49Z [Info] Successfully sent batch of 50 segments (0.011 seconds)
2020-10-15T05:06:49Z [Info] Successfully sent batch of 50 segments (0.008 seconds)
2020-10-15T05:06:49Z [Info] Successfully sent batch of 50 segments (0.010 seconds)
2020-10-15T05:06:49Z [Info] Successfully sent batch of 50 segments (0.010 seconds)
2020-10-15T05:06:50Z [Info] Successfully sent batch of 50 segments (0.010 seconds)
2020-10-15T05:06:50Z [Info] Successfully sent batch of 50 segments (0.014 seconds)
2020-10-15T05:06:50Z [Info] Successfully sent batch of 50 segments (0.011 seconds)
2020-10-15T05:06:50Z [Info] Successfully sent batch of 50 segments (0.012 seconds)
2020-10-15T05:06:50Z [Info] Successfully sent batch of 50 segments (0.009 seconds)
2020-10-15T05:06:50Z [Info] Successfully sent batch of 50 segments (0.008 seconds)
2020-10-15T05:06:51Z [Info] Successfully sent batch of 50 segments (0.009 seconds)
2020-10-15T05:06:51Z [Info] Successfully sent batch of 50 segments (0.010 seconds)

This is how the services are configured

apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualGateway
metadata:
  name: ingress-gw
  namespace: dev
spec:
  namespaceSelector:
    matchLabels:
      gateway: ingress-gw
  podSelector:
    matchLabels:
      app: ingress-gw
  listeners:
    - portMapping:
        port: 8088
        protocol: http
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: GatewayRoute
metadata:
  name: ganesh-permissions
  namespace: dev
spec:
  httpRoute:
    match:
      prefix: "/ganesh-permissions/dev"
    action:
      target:
        virtualService:
          virtualServiceRef:
            name: ganesh-permissions
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: ganesh-permissions
  namespace: dev
spec:
  awsName: ganesh-permissions.dev.svc.cluster.local
  provider:
    virtualRouter:
      virtualRouterRef:
        name: ganesh-permissions
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  namespace: dev
  name: ganesh-permissions
spec:
  listeners:
    - portMapping:
        port: 80
        protocol: http
  routes:
    - name: ganesh-permissions-route
      priority: 10
      httpRoute:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeRef:
                name: ganesh-permissions-vnode
              weight: 1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: GatewayRoute
metadata:
  name: service-providers
  namespace: dev
spec:
  httpRoute:
    match:
      prefix: "/service-providers/dev"
    action:
      target:
        virtualService:
          virtualServiceRef:
            name: service-providers
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: service-providers
  namespace: dev
spec:
  awsName: service-providers.dev.svc.cluster.local
  provider:
    virtualRouter:
      virtualRouterRef:
        name: service-providers
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  namespace: dev
  name: service-providers
spec:
  listeners:
    - portMapping:
        port: 80
        protocol: http
  routes:
    - name: service-providers-route
      priority: 10
      httpRoute:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeRef:
                name: service-providers-vnode
              weight: 1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: ganesh-permissions-vnode
  namespace: dev
spec:
  podSelector:
    matchLabels:
      app: ganesh-permissions
  listeners:
    - portMapping:
        port: 80
        protocol: http
  serviceDiscovery:
    dns:
      hostname: ganesh-permissions.dev.svc.cluster.local
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: service-providers-vnode
  namespace: dev
spec:
  podSelector:
    matchLabels:
      app: service-providers
  listeners:
    - portMapping:
        port: 80
        protocol: http
  backends:
    - virtualService:
        virtualServiceRef:
          name: ganesh-permissions
  serviceDiscovery:
    dns:
      hostname: service-providers.dev.svc.cluster.local
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: worklink
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: ingress-gw
          servicePort: 80
        path: /*
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-gw
  namespace: dev
spec:
  ports:
  - port: 80
    targetPort: 8088
    protocol: TCP
  type: NodePort
  selector:
    app: ingress-gw
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-gw
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-gw
  template:
    metadata:
      labels:
        app: ingress-gw
    spec:
      containers:
        - name: envoy
          image: 840364872350.dkr.ecr.region-code.amazonaws.com/aws-appmesh-envoy:v1.15.1.0-prod
          ports:
            - containerPort: 8088
      serviceAccountName: worklink-dev-sa
      securityContext:
        fsGroup: 65534
---

Lambda in VPC

If I understand correctly, the X-Ray daemon is provisioned automatically for Lambda functions.

Is this true for VPC based Lambdas?

I am having the following issue with a VPC based Lambda which uses X-Ray:

Error: connect ECONNREFUSED 169.254.79.2:2000
    at Object._errnoException (util.js:1022:11)
    at _exceptionWithHostPort (util.js:1044:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1198:14)

Provide statically linked binary (CGO_ENABLED=0) to allow smaller docker containers.

I would like to reduce the size of my xray daemon docker container.

By compiling the executable myself with CGO_ENABLED=0, I was able to reduce the size to 12 MB.
This is because if the binary already contains all necessary libraries, I do not need a base image to start from, so the Docker image only needs to contain the binary and a CA bundle.

It would be really helpful if you could provide an executable compiled like that with the other executables.

Currently my Dockerfile looks like this:

# source for ca-bundle so we can do https requests
FROM amazonlinux:2 as trusted

# build statically linked binary
FROM golang:1.11 as builder
RUN go get -d -u github.com/aws/aws-xray-daemon/...
RUN cd ${GOPATH}/src/github.com/aws/aws-xray-daemon && CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o build/xray ./daemon/daemon.go ./daemon/tracing.go

# start a container from scratch containing only necessary files
FROM scratch
COPY --from=trusted /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
COPY --from=builder /go/src/github.com/aws/aws-xray-daemon/build/xray /xray
EXPOSE 2000
ENTRYPOINT ["/xray","--bind=0.0.0.0:2000"]

If the binary I need were available (e.g. as aws-xray-daemon-linux-bundle-3.x.zip), I could reduce the Dockerfile to:

FROM amazonlinux:2 as builder
RUN yum install -y unzip
RUN curl -o daemon.zip https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-bundle-3.x.zip
RUN unzip daemon.zip && cp xray /xray

# start a container from scratch containing only necessary files
FROM scratch
COPY --from=builder /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
COPY --from=builder /xray /xray

X-Ray daemon not sending traces to X-Ray service

I am trying to run the X-Ray daemon inside a Docker container.

this is my docker-compose.yaml

version: '3'

services:
# Other services

  app:
    build: ./
    volumes:
      - ./:/var/www/app
    ports:
      - 8080:8080
    links:
      - redis
    depends_on:
      - db
    command:
      sh -c 'npm i && npm run start'

  xray:
    build:
      context: ./
      dockerfile: Dockerfile.xray
    ports:
    - 2000:2000/udp
    - 2000:2000/tcp
    environment:
      - AWS_REGION=ap-southeast-1

Docker file for the xray image

FROM amazonlinux
RUN yum install -y unzip
RUN curl -o daemon.zip https://s3.dualstack.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-3.x.zip
RUN unzip daemon.zip && cp xray /usr/bin/xray
ENTRYPOINT ["/usr/bin/xray", "-t", "0.0.0.0:2000", "-b", "0.0.0.0:2000"]
EXPOSE 2000/udp
EXPOSE 2000/tcp

my instrumented nodejs app

const AWSXRay = require('aws-xray-sdk')
app.use(AWSXRay.express.openSegment('my-backend'))
// all the routes here
// ...

app.use(AWSXRay.express.closeSegment())

Those are the docker-compose logs

xray_1   | 2020-03-03T06:16:28Z [Info] Initializing AWS X-Ray daemon 3.2.0
xray_1   | 2020-03-03T06:16:28Z [Info] Using buffer memory limit of 9 MB
xray_1   | 2020-03-03T06:16:28Z [Info] 144 segment buffers allocated
xray_1   | 2020-03-03T06:16:28Z [Info] Using region: ap-southeast-1

xray_1   | 2020-03-03T06:30:27Z [Info] HTTP Proxy server using X-Ray Endpoint : https://xray.ap-southeast-1.amazonaws.com
xray_1   | 2020-03-03T06:30:27Z [Info] Starting proxy http server on 0.0.0.0:2000

I have a IAM role attached to my EC2 with AWSXrayFullAccess policy.

Obviously I am sending a bunch of requests, but I still do not see anything in the X-Ray console, nor the [Info] Successfully sent batch of 1 segments message in the docker-compose logs.

Any idea?
Thanks

EDIT:
I have tried to manually send a segment with cat segment.txt > /dev/udp/127.0.0.1/2000 and it worked:

xray_1   | 2020-03-03T09:22:52Z [Info] Successfully sent batch of 1 segments (0.020 seconds)

So I must have done something wrong with the SDK

Does aws-xray-daemon support SamplingRules?

Hi there,

We are using xray-daemon with App Mesh in ECS (Fargate) as instructed in https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html#xray-daemon-ecs-build.

Health checks account for most of the X-Ray traces in our dev environment, so we'd like to use sampling rules to exclude them, but xray-daemon is NOT loading sampling rules from the X-Ray console; the attached screenshot shows the ECS logs for xray-daemon.

image

I thought about baking a config file into a Docker image and then configuring the xray-daemon to use it. But after having a look at the code, xray-daemon does NOT seem to configure sampling rules at all.

Does this mean we must use xray-sdk to instrument our applications if we'd like to exclude health check paths from xray tracing?

Thank you!

Best regards,
Arthur

Consuming high amounts of memory?

Hiya, team!

We're running the daemon on our servers and are seeing extraordinarily high memory usage. I'm currently seeing one server with 20% (800M), and I've seen other servers with 50% (I don't recall if that was a 2G or 4G instance, but either way: 1-2G of memory consumed).

We're running the default configuration:

# Maximum buffer size in MB (minimum 3). Choose 0 to use 1% of host memory.
TotalBufferSizeMB: 0
# Maximum number of concurrent calls to AWS X-Ray to upload segment documents.
Concurrency: 8
# Send segments to AWS X-Ray service in a specific region
Region: ""
# Change the X-Ray service endpoint to which the daemon sends segment documents.
Endpoint: ""
Socket:
  # Change the address and port on which the daemon listens for UDP packets containing segment documents.
  UDPAddress: "127.0.0.1:2000"
Logging:
  LogRotation: true
  # Change the log level, from most verbose to least: dev, debug, info, warn, error, prod (default).
  LogLevel: "prod"
  # Output logs to the specified file path.
  LogPath: ""
# Turn on local mode to skip EC2 instance metadata check.
LocalMode: false
# Amazon Resource Name (ARN) of the AWS resource running the daemon.
ResourceARN: ""
# Assume an IAM role to upload segments to a different account.
RoleARN: ""
# Disable TLS certificate verification.
NoVerifySSL: false
# Upload segments to AWS X-Ray through a proxy.
ProxyAddress: ""
# Daemon configuration file format version.

I'm unsure of how to debug this. Any ideas?

Lambda config support

It's not clear to me how this daemon works within Lambda, but would adding something like /var/task/.xray/cfg.yaml here make it possible to configure e.g. sending X-Ray traces from Lambda to a different account?

Adding support for ECS_ENABLE_CONTAINER_METADATA and ECS_CONTAINER_METADATA_FILE

Hello,

I have been contacting AWS Support on my company's AWS account about this, but I'm using my personal Github profile for the request.
We are using the aws-xray-daemon as a side container on ECS, and we use bridged networking. The ECS instances under the hood are using the iptables rule suggested on https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html, which prevents the daemon from accessing the region through the metadata.
We could provide the AWS_REGION as environment variable to all the task definitions, but I feel like a solution across the board would be best.

I've looked into using ECS_ENABLE_CONTAINER_METADATA and ECS_CONTAINER_METADATA_FILE from https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-metadata.html and AWS Support agrees it would be the best option.
I'd be happy to work on a pull request, if you agree it's something you are willing to accept.

Regards,
Mauro

Only log unprocessed segments instead of entire batch

The unprocessed segments response contains IDs of segments that failed to be processed. We currently have two problems with logging of unprocessed segments

  1. We log the same batch multiple times for each unprocessed segment
  2. We log processed segments too. We may as well only log unprocessed segments by comparing the IDs
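The two points above can be sketched in Go. This is only an illustration of the proposed fix, using hypothetical types; the daemon's real batch and response structures differ. The idea is to index the unprocessed IDs from the PutTraceSegments response and keep only the matching documents, so each failed segment is logged exactly once.

```go
package main

import "fmt"

// segmentDoc is a hypothetical stand-in for a queued segment document.
type segmentDoc struct {
	ID   string
	Body string
}

// filterUnprocessed returns only the documents whose IDs appear in the
// unprocessed-segments portion of the service response.
func filterUnprocessed(batch []segmentDoc, unprocessedIDs []string) []segmentDoc {
	ids := make(map[string]struct{}, len(unprocessedIDs))
	for _, id := range unprocessedIDs {
		ids[id] = struct{}{}
	}
	var failed []segmentDoc
	for _, doc := range batch {
		if _, ok := ids[doc.ID]; ok {
			failed = append(failed, doc)
		}
	}
	return failed
}

func main() {
	batch := []segmentDoc{{ID: "a1"}, {ID: "b2"}, {ID: "c3"}}
	// Log only the documents that actually failed, and log them once.
	for _, doc := range filterUnprocessed(batch, []string{"b2"}) {
		fmt.Println("unprocessed segment:", doc.ID)
	}
}
```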

Windows could not start the AWSXrayDaemon service on Local Computer

I get this error when trying to setup and start xray daemon on local Windows 7:

Error 1067: The process terminated unexpectedly

Logs :

2018-10-10T09:55:41+02:00 [Info] Initializing AWS X-Ray daemon 3.0.0
2018-10-10T09:55:41+02:00 [Info] Using buffer memory limit of 1309 MB
2018-10-10T09:55:41+02:00 [Info] 20944 segment buffers allocated
2018-10-10T09:55:41+02:00 [Error] Unable to retrieve the region from the EC2 instance RequestError: send request failed
caused by: Get http://169.254.169.254/latest/meta-data/placement/availability-zone: dial tcp 169.254.169.254:80: connectex: A socket operation was attempted to an unreachable network.

2018-10-10T09:55:41+02:00 [Error] Cannot fetch region variable from config file, environment variables and ec2 metadata.

This is my modified cfg.yaml :

# Maximum buffer size in MB (minimum 3). Choose 0 to use 1% of host memory.
TotalBufferSizeMB: 0
# Maximum number of concurrent calls to AWS X-Ray to upload segment documents.
Concurrency: 8
# Send segments to AWS X-Ray service in a specific region
Region: "eu-west-1"
# Change the X-Ray service endpoint to which the daemon sends segment documents.
Endpoint: ""
Socket:
  # Change the address and port on which the daemon listens for UDP packets containing segment documents.
  UDPAddress: "127.0.0.1:2000"
  # Change the address and port on which the daemon listens for HTTP requests to proxy to AWS X-Ray.
  TCPAddress: "127.0.0.1:2000"
Logging:
  LogRotation: true
  # Change the log level, from most verbose to least: dev, debug, info, warn, error, prod (default).
  LogLevel: "prod"
  # Output logs to the specified file path.
  LogPath: ""
# Turn on local mode to skip EC2 instance metadata check.
LocalMode: true
# Amazon Resource Name (ARN) of the AWS resource running the daemon.
ResourceARN: ""
# Assume an IAM role to upload segments to a different account.
RoleARN: ""
# Disable TLS certificate verification.
NoVerifySSL: true
# Upload segments to AWS X-Ray through a proxy.
ProxyAddress: ""
# Daemon configuration file format version.
Version: 2

I also have set the env var AWS_DEFAULT_REGION

Daemon starts before network is available

We're deploying the daemon onto CentOS instances and are seeing a recurring issue where the daemon attempts to start before networking is available.

The daemon fails because it cannot access the metadata service to obtain the current region:

2018-04-16T14:56:05Z [Info] Initializing AWS X-Ray daemon 2.1.0
2018-04-16T14:56:05Z [Info] Using buffer memory limit of 19 MB
2018-04-16T14:56:05Z [Info] 304 segment buffers allocated
2018-04-16T14:56:05Z [Error] Unable to retrieve the region from the EC2 instance RequestError: send request failed
caused by: Get http://169.254.169.254/latest/meta-data/placement/availability-zone: dial tcp 169.254.169.254:80: connect: network is unreachable
2018-04-16T14:56:05Z [Error] Cannot fetch region variable from config file, environment variables and ec2 metadata.

A service xray start on an instance works as expected.

Are you aware of this and do you have any suggestions?
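One systemd-level mitigation worth trying (a sketch under the assumption that the package installs a unit named xray.service; the drop-in path shown is an example) is to order the unit after network-online.target:

```ini
# /etc/systemd/system/xray.service.d/10-network-online.conf -- example drop-in path
[Unit]
Wants=network-online.target
After=network-online.target
```

After creating the drop-in, run systemctl daemon-reload and restart the service; the unit should then wait for the network-online target before the daemon queries the metadata service.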

Cannot start the X-Ray daemon on an Elastic Beanstalk host.

I'm running a Single Docker Elastic Beanstalk environment. I'm trying to run the X-Ray daemon by setting the following configuration

commands:
  01-stop-tracing:
    command: yum remove -y xray
    ignoreErrors: true
  02-copy-tracing:
    command: curl https://s3.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-3.x.rpm -o /home/ec2-user/xray.rpm
  03-start-tracing:
    command: yum install -y /home/ec2-user/xray.rpm

files:
  "/opt/elasticbeanstalk/tasks/taillogs.d/xray-daemon.conf" :
    mode: "000644"
    owner: root
    group: root
    content: |
      /var/log/xray/xray.log
  "/etc/amazon/xray/cfg.yaml" :
    mode: "000644"
    owner: root
    group: root
    content: |
      Logging:
        LogLevel: "debug"

However, this is what I see in the logs:

[Error] Config Version Setting is not correct. Use X-Ray Daemon Config Migration Script to update the config file. Please refer to AWS X-Ray Documentation for more information.

I've also tried running the binary manually with the default cfg.yaml and, surprise surprise, it works inside the container but not outside on the Beanstalk host.

What am I doing wrong? I've tried looking the error up on Google to no avail.

When I run the program, the following error appears

{"level":"info","msg":"Couldn't describe resources for region us-east-2: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors\n","time":"2021-01-23T17:25:48Z"}

Multiple integration support

Goal

Currently, the X-Ray daemon sends data to the AWS X-Ray service. This issue discusses the changes to the existing design needed to support multiple backends besides the X-Ray service.

Current design

The X-Ray daemon receives segments on the daemon address; each received segment is prefixed with a daemon header. The current design uses a global memory pool, known as the buffer pool (preallocated at initialization, default 1% of total memory), for receiving the UDP payload. A ring buffer (RB), implemented as a channel, stores received segments via a goroutine. The RB holds 250 segments, and each segment in the RB points to a buffer allocated from the buffer pool. By default each buffer is 64 KB, and we do not split large payloads across multiple buffers. A Processor on the receiving end of the RB channel batches segments in a goroutine. A batch is ready to be sent to the Batch Processor when it is large enough (default: 50 segments) or when the processor goroutine hits an idle timeout (default: 1 second); at that point the raw payload for the batch is serialized to strings and the buffers are returned to the pool for reuse. The Batch Processor then uses the X-Ray client to send batches to the X-Ray service via the PutTraceSegments API.

Modularization

We intend to decouple components of the X-Ray daemon, so the segments batched by the X-Ray daemon can be routed to the desired backend service. The changes to the design are backward compatible and support the X-Ray service by default.

Client

We create an X-Ray client instance to call the PutTraceSegments API, which sends data to the X-Ray service. The X-Ray client implements the XRay interface, which contains the X-Ray service API methods. We will add another interface, Service (name yet to be finalized), which contains a PutSegments() method. A Client structure will implement the Service interface for the desired backend service, acting as a bridge between the X-Ray daemon and that backend.

Registering Client

In the current design, the X-Ray client is created during initialization of the Processor instance and set on the Batch Processor instance. When a batch of segments is ready, the Batch Processor uses the X-Ray client to send the data to the X-Ray service. This part needs restructuring: the Client (or X-Ray client) will instead be created as part of daemon initialization and passed to the Processor instance. Once the Batch Processor is configured with the Client, the existing architecture will send batches to the configured backend service.

Note: These are initial thoughts on modularizing the X-Ray daemon. Your suggestions are welcome.

Docker on Elastic Beanstalk Not Connecting

Hi,

I've been searching a ton for a solution to this problem, but no luck tracking down my issue.

I'm using Java and Spring Boot inside a docker container on elastic beanstalk. I added the code to expose the ports for xray.

EXPOSE 2000/tcp
EXPOSE 2000/udp

I added my ebextensions config file with this:

files:
  "/opt/elasticbeanstalk/tasks/taillogs.d/xray-daemon.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      /var/log/xray/xray.log
  "/etc/amazon/xray/cfg.yaml":
    mode: "000644"
    owner: root
    group: root
    content: |
      Logging:
        LogLevel: "debug"
      Version: 2
  "/etc/awslogs/config/xray.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [xray-log]
      log_group_name=/aws/elasticbeanstalk/llitd/docker-xray
      log_stream_name={instance_id}
      file=/var/log/xray/*.log
      file_fingerprint_lines=2-5

container_commands:
  01_stop_tracing:
    command: yum remove -y xray
    ignoreErrors: true
  02_copy_tracing:
    command: curl https://s3.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-3.x.rpm -o /home/ec2-user/xray.rpm
  03_start_tracing:
    command: yum install -y /home/ec2-user/xray.rpm

and in my dockerrun.aws file i have opened this port:

{
  "ContainerPort": "2000",
  "HostPort": "2000"
}

In the log file for my xray stream I see:

[Info] Initializing AWS X-Ray daemon 3.2.0
[Debug] Listening on UDP 127.0.0.1:2000
[Info] Using buffer memory limit of 38 MB
[Info] 608 segment buffers allocated
[Debug] Using proxy address:
[Debug] Fetch region us-east-1 from ec2 metadata
[Info] Using region: us-east-1
[Debug] ARN of the AWS resource running the daemon:
[Debug] Using ip-172-31-41-65.ec2.internal hostname for telemetry records
[Debug] Using i-0b2a13c006e33014f Instance Id for Telemetry records
[Debug] Using Endpoint: https://xray.us-east-1.amazonaws.com
[Debug] Telemetry initiated
[Info] HTTP Proxy server using X-Ray Endpoint : https://xray.us-east-1.amazonaws.com
[Debug] Using Endpoint: https://xray.us-east-1.amazonaws.com
[Debug] Batch size: 50
[Info] Starting proxy http server on 127.0.0.1:2000
[Debug] Skipped telemetry data as no segments found
[Debug] Skipped telemetry data as no segments found

And it just keeps logging skipped telemetry after that.

In the docker log file I see tons of UDP Emitters running.

DEBUG c.a.x.e.UDPEmitter { "name" : "writeBehind", "id" : "0ab3649415ffbd7d", "start_time" : 1.600271771496E9, "trace_id" : "1-5f62359b-31d6ea0f342328bac438f7b8", "end_time" : 1.600271771497E9, "aws" : { "xray" : { "sdk_version" : "2.7.1", "sdk" : "X-Ray for Java" } }, "metadata" : { "ClassInfo" : { "Class" : "StoryCardCacheService" } }, "service" : { "runtime" : "OpenJDK 64-Bit Server VM", "runtime_version" : "14.0.2" } }
DEBUG c.a.x.e.UDPEmitter Sending UDP packet.

And earlier when it first loaded it was:

DEBUG c.a.x.c.DaemonConfiguration TCPAddress is set to 127.0.0.1:2000.
DEBUG c.a.x.c.DaemonConfiguration UDPAddress is set to 127.0.0.1:2000.

So I know the Docker container (via Spring AOP) is sending to 127.0.0.1:2000, and I know the X-Ray daemon is running on the Elastic Beanstalk host and listening for UDP on the same address, but for some reason they aren't actually talking to each other. I'm sure I'm missing something really simple; I'm just out of ideas about what to try to get this working.

Replace logging library with zap

We currently use seelog, which isn't actively maintained and appears slow: it takes a global lock on every log call, even for disabled levels.

zap is popular in most Go projects I've seen, since it's very fast and full featured.

Must be run with elevated privileges

On Linux, aws-xray-daemon implicitly tries to open the file /etc/amazon/xray/cfg.yaml, even when the -c option is used. This file is not in the user's tree, so it requires sudo access. Related log entry:
[Error] Error occur when using config flag: open /etc/amazon/xray/cfg.yaml: permission denied

I can't connect x-ray console via x-ray daemon by docker

Dockerfile

FROM amazonlinux
RUN yum install -y unzip
RUN curl -o daemon.zip https://s3.dualstack.ap-southeast-2.amazonaws.com/aws-xray-assets.ap-southeast-2/xray-daemon/aws-xray-daemon-linux-3.x.zip
RUN unzip daemon.zip && cp xray /usr/bin/xray
ENTRYPOINT ["/usr/bin/xray", "-t", "0.0.0.0:2000", "-b", "0.0.0.0:2000"]
EXPOSE 2000/udp
EXPOSE 2000/tcp
...
docker run \
  --attach STDOUT \
  -v ~/.aws/:/root/.aws/:ro \
  --net=host \
  -e AWS_REGION=ap-southeast-2 \
  --name xray-daemon \
  -p 2000:2000/udp \
  xray-daemon -o

I have set up my AWS access key and secret access key in the credentials file of the .aws folder.

The result of running the container is

Then I go to https://xray.ap-southeast-2.amazonaws.com/ in a browser and get this error:

<MissingAuthenticationTokenException>
  <Message>Missing Authentication Token</Message>
</MissingAuthenticationTokenException>

My app is running normally. I still see X-Amzn-Trace-Id: Root=1-5dd25d16-b57eba6c376e7395d1645b16; in the response headers, but the traces aren't pushed to the X-Ray console.

How can I fix this? What's wrong with my X-Ray daemon setup? Please help me.

Definition of "spilling over"

We are seeing a lot of Spilling over 50 segments messages in the logs, but it's not clear what this means. Does it just mean there are more than 50 segments in the queue, which will be sent in the next request, or are those segments dropped entirely and never sent upstream?

Support TCP only Segment document transfers

In the organization my team and I work in, our cloud and security admins have given us the following stipulations for implementing tracing:

  1. We must use standalone EC2 instances, no ECS clusters, etc (Until we completely migrate to Fargate).
  2. We should only have one app running per EC2 instance, no further containers, no daemon sidecar.
  3. We can’t bridge between EC2 instances (container to container communication is explicitly disabled)
  4. Our organization registry/gateway services must be used for authentication and communication between apps (including the daemon) and they can only support TCP.

Is there any thought to making this a configurable option or something similar? I should add that our organization has quite a large enterprise ecosystem, which is why we are trying to begin instrumenting our apps with X-Ray.

Rename the "daemon" directory to "xray"

When installing the daemon using go get, the binary gets installed in the $GOPATH/bin directory with the name daemon. A lot of Go users add that directory to their PATH, but daemon isn't a super descriptive command. I think it should be renamed to something like xray.

X-Ray Sampling Rules :: Error reading response

Issue

We are getting an error when X-Ray executes the sampling rules.

Current Behavior

This happens every minute while our app is running. The problem occurs when processing the CreatedAt field of the SamplingRuleRecord class present in GetSamplingRulesResult.
We think this may have started when we began using version 2.6.0 of the X-Ray SDK.

Application Error

Log stream /ecs/IndexerTask ecs/IndexerContainer/186e60110e7749b3b35d8ac761d56ea5:

2020-06-20T16:31:09Z [Info] Successfully sent batch of 1 segments (0.015 seconds)
20-Jun-2020 16:31:15.531 INFO [pool-5-thread-1] com.amazonaws.xray.strategy.sampling.pollers.RulePoller.pollRule Polling sampling rules.
20-Jun-2020 16:31:15.558 SEVERE [pool-5-thread-1] com.amazonaws.xray.strategy.sampling.pollers.RulePoller.lambda$start$0 Encountered error polling GetSamplingRules:
com.amazonaws.xray.internal.XrayClientException: Error reading response.
    at com.amazonaws.xray.internal.UnsignedXrayClient.sendRequest(UnsignedXrayClient.java:136)
    at com.amazonaws.xray.internal.UnsignedXrayClient.getSamplingRules(UnsignedXrayClient.java:88)
    at com.amazonaws.xray.strategy.sampling.pollers.RulePoller.pollRule(RulePoller.java:100)
    at com.amazonaws.xray.strategy.sampling.pollers.RulePoller.lambda$start$0(RulePoller.java:72)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of java.util.Date out of VALUE_NUMBER_FLOAT token
    at [Source: (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream); line: 1, column: 38] (through reference chain: com.amazonaws.services.xray.model.GetSamplingRulesResult["SamplingRuleRecords"]->java.util.ArrayList[0]->com.amazonaws.services.xray.model.SamplingRuleRecord["CreatedAt"])
    at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:59)
    at com.fasterxml.jackson.databind.DeserializationContext.reportInputMismatch(DeserializationContext.java:1464)
    at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1238)
    at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1148)
    at com.fasterxml.jackson.databind.deser.std.StdDeserializer._parseDate(StdDeserializer.java:511)
    at com.fasterxml.jackson.databind.deser.std.DateDeserializers$DateBasedDeserializer._parseDate(DateDeserializers.java:200)
    at com.fasterxml.jackson.databind.deser.std.DateDeserializers$DateDeserializer.deserialize(DateDeserializers.java:290)
    at com.fasterxml.jackson.databind.deser.std.DateDeserializers$DateDeserializer.deserialize(DateDeserializers.java:273)
    at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:293)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:156)
    at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:285)
    at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:244)
    at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:27)
    at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:293)
    at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:156)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4482)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3471)
    at com.amazonaws.xray.internal.UnsignedXrayClient.sendRequest(UnsignedXrayClient.java:134)
    ... 9 more

Environment

AWS Java SDK version used: 2.13.40
AWS X-Ray SDK version used: 2.6.0
Jackson version used: 2.11.0
JDK version used: jdk11-corretto
Operating System and version: docker tomcat:9-jdk11-corretto
Webapp build and deploy to ECS Fargate Cluster with AWS Code Pipeline

Single X-ray Daemon for Multiple Containers

Hi Experts,

Currently we have 50+ containers running on an ECS cluster, and each application container has its own X-Ray sidecar container, so for 50 application containers we end up running ~100 containers (app + xray). Is there any way to run a single X-Ray daemon per ECS host that traces requests from all the application containers on that host, so we can avoid a sidecar container for each app container? Please let me know if you need more information.

Appreciate your help on this!!!

Suggestion: Guide on managing XRay as a separate service

I stumbled on issues #24 and #53 while Googling for help with the same tasks. I was still a little unclear on the process of setting up X-Ray as a separate ECS service and having other microservices interact with it. A setup design guide along those lines would be incredibly helpful for adoption, as it's a relatively new but exciting feature. Feel free to close the issue if it seems irrelevant.

Single XRay Daemon in ECS Cluster not sending traces

Hello,

Just like #24, I am trying to deploy a single X-Ray daemon to serve every service in my ECS cluster.

If I deploy the agent as a container in the service everything works fine. If I try to deploy it as a separate service (behind Load Balancer and tied to Route53) my services are unable to send segments to the daemon.

I can correctly see

[Debug] Send xx telemetry record(s)

when the agent is inside the service, which is configured with AWS_XRAY_DAEMON_ADDRESS=xray-agent:2000

But if I try to reach it outside the service as a separate microservice I only get

[Debug] Skipped telemetry data as no segments found

using AWS_XRAY_DAEMON_ADDRESS=xray.myenvironment.mydomain:2000

Note that the host is reachable from my local machine, from inside the EC2 host, and from inside the specific container, so it's not a network issue.

The task has the same IAM policy attached, and the security group allows my whole VPC to reach port 2000 via UDP/TCP:

- PolicyName: xray-writeonly
  PolicyDocument:
    Statement:
      - Action:
          - "xray:PutTraceSegments"
          - "xray:PutTelemetryRecords"
        Effect: "Allow"
        Resource:
          - "*"

And this is the CloudFormation template

- Name: xray-agent
  Essential: true
  Image: amazon/aws-xray-daemon
  Cpu: 32
  Command:
    - --log-level=dev
  Memory: 64
  PortMappings:
    - ContainerPort: 2000
      HostPort: 0
      Protocol: udp
  Environment:
    - Name: AWS_DEFAULT_REGION
      Value: !Ref "AWS::Region"
    - Name: AWS_REGION
      Value: eu-west-1
    - Name: AWS_SDK_LOAD_CONFIG
      Value: "1"
  LogConfiguration:
    LogDriver: awslogs
    Options:
      awslogs-group: !Ref AWS::StackName
      awslogs-region: !Ref AWS::Region
      awslogs-stream-prefix: "xray-agent"

Since it isn't outputting any error log, this is difficult for me to debug.

thanks

Failed to send telemetry 6 record(s). Re-queue records. NoCredentialProviders: no valid providers in chain. Deprecated.

2020-05-10T05:05:13Z [Warn] Delaying sending of additional batches by 0 seconds
2020-05-10T05:05:14Z [Debug] processor: sending partial batch
2020-05-10T05:05:14Z [Debug] processor: segment batch size: 1. capacity: 50
2020-05-10T05:05:14Z [Error] Sending segment batch failed with: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2020-05-10T05:05:14Z [Warn] Delaying sending of additional batches by 0 seconds
2020-05-10T05:05:30Z [Debug] Failed to send telemetry 3 record(s). Re-queue records. NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2020-05-10T05:06:30Z [Debug] Failed to send telemetry 4 record(s). Re-queue records. NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2020-05-10T05:07:30Z [Debug] Failed to send telemetry 5 record(s). Re-queue records. NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2020-05-10T05:07:57Z [Debug] Received request on HTTP Proxy server : /GetSamplingRules
2020-05-10T05:07:57Z [Error] Unable to sign request: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2020-05-10T05:07:58Z [Debug] Received request on HTTP Proxy server : /GetSamplingRules
2020-05-10T05:07:58Z [Error] Unable to sign request: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2020-05-10T05:07:58Z [Debug] Received request on HTTP Proxy server : /GetSamplingRules
2020-05-10T05:07:58Z [Error] Unable to sign request: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2020-05-10T05:07:58Z [Debug] Received request on HTTP Proxy server : /GetSamplingRules
2020-05-10T05:07:58Z [Error] Unable to sign request: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2020-05-10T05:08:30Z [Debug] Failed to send telemetry 6 record(s). Re-queue records. NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors

Fix Log Levels

Right now, "prod" log level is set to info: https://github.com/aws/aws-xray-daemon/blob/master/pkg/logger/log_config.go

We should configure it to "Error" level, since we state in the docs that it's the least verbose. This is arguably a breaking change since prod is the default log level, but we can discuss whether that is truly the case given the current behavior is incorrect.

We should also add a "silent" log level that suppresses all log messages.

Related: #62

failed to load config file - unsupported expression

I am new to AWS and X-Ray. I set up a running instance of some cloud services for myself; the app is working and X-Ray logs traces. However, when I try to run the app locally on my workstation while capturing diagnostics to X-Ray, I get the following error, despite having successfully set up AWS profiles on my workstation with the key and secret for the root of the account I'm building for:

set AWS_PROFILE=myprofile
xray_windows.exe -o -n us-east-2
2019-12-11T14:04:14-07:00 [Info] Initializing AWS X-Ray daemon 3.2.0
2019-12-11T14:04:14-07:00 [Info] Using buffer memory limit of 161 MB
2019-12-11T14:04:14-07:00 [Info] 2576 segment buffers allocated
2019-12-11T14:04:14-07:00 [Error] Error in creating session object : SharedConfigLoadError: failed to load config file, C:\Users\Jon\.aws\credentials
caused by: INIParseError: unsupported expression {expr {1 STRING 0 [65533 65533]} true []}

The entire contents of the file located at C:\Users\Jon\.aws\credentials is (actual keys omitted here):

[myprofile]
aws_access_key_id = 01234
aws_secret_access_key = 1234

I also tried using [default] at the top of the file along with running set AWS_PROFILE=default, same error.

I presume the daemon itself is fine, so what am I doing wrong?

Unable to install the X-Ray daemon via systemctl on Ubuntu 18.04

Steps to replicate:

wget https://s3.dualstack.eu-west-1.amazonaws.com/aws-xray-assets.eu-west-1/xray-daemon/aws-xray-daemon-3.x.deb

sudo dpkg -i aws-xray-daemon-3.x.deb

sudo systemctl enable xray

sudo systemctl status xray

Result:

Sep 17 06:01:20 ip-172-31-30-247 systemd[1]: xray.service: Main process exited, code=exited, status=1/FAILURE
Sep 17 06:01:20 ip-172-31-30-247 systemd[1]: xray.service: Failed with result 'exit-code'.
Sep 17 06:01:20 ip-172-31-30-247 systemd[1]: xray.service: Service hold-off time over, scheduling restart.
Sep 17 06:01:20 ip-172-31-30-247 systemd[1]: xray.service: Scheduled restart job, restart counter is at 6.
Sep 17 06:01:20 ip-172-31-30-247 systemd[1]: Stopped AWS X-Ray Daemon.
Sep 17 06:01:20 ip-172-31-30-247 systemd[1]: xray.service: Start request repeated too quickly.
Sep 17 06:01:20 ip-172-31-30-247 systemd[1]: xray.service: Failed with result 'exit-code'.
Sep 17 06:01:20 ip-172-31-30-247 systemd[1]: Failed to start AWS X-Ray Daemon.

Verbose log by journalctl -u xray:

[Error] Error occur when using config flag: open /etc/amazon/xray/cfg.yaml: permission denied

AWS XRay Daemon High CPU Usage

Last week we deployed the X-Ray daemon as a DaemonSet to our EKS cluster, and we have seen a marked increase in CPU usage. Our cluster nodes, which typically idle at 6-8% CPU, have been running at around 17-20%. Scaling the DaemonSet back to zero restored node CPU to around 6%. The image we are using is amazon/aws-xray-daemon:3.x (https://hub.docker.com/layers/amazon/aws-xray-daemon/3.x/images/sha256-0c5b0deb0332e28669059fa936662cb05cbe47864bdd91e9d26cb479df62cc1d?context=explore). I didn't see any errors in the X-Ray daemon logs.

unable to connect Xray using Xray-daemon from a docker app

I'm running the X-Ray daemon locally and want to push data to X-Ray from my Docker app using AWSXRay.setDaemonAddress('127.0.0.1:2000').
I verified the daemon installation by running cat segment.txt > /dev/udp/127.0.0.1/2000 inside the daemon container, and it works well:

2020-05-11T04:39:45Z [Debug] processor: sending partial batch
2020-05-11T04:39:45Z [Debug] processor: segment batch size: 1. capacity: 50
2020-05-11T04:39:46Z [Info] Successfully sent batch of 1 segments (1.653 seconds)
2020-05-11T04:40:19Z [Debug] Send 1 telemetry record(s)

I created a new network as Mynetwork and connected both the containers (app and daemon) to that. Even able to ping them
docker exec -ti Myapp ping daemon.

I tried passing the IP address, the Docker IPv4 address (172.21.0.2), and the container name xray-daemon as the host in AWSXRay.setDaemonAddress('host:2000'), but nothing happens: neither an error nor any success in the logs.

Any suggestions to access daemon in my app?

Much Appreciate your help on this!!!

Unable to see X-RAY traces

Hi Experts,

I have configured the X-Ray daemon on AWS EC2 as described here: https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ec2.html. I can see the xray service running on the EC2 instance. Below are the X-Ray logs:
$ /usr/bin/xray
2019-02-27T05:57:33Z [Info] Initializing AWS X-Ray daemon 2.1.3
2019-02-27T05:57:33Z [Error] listen udp 127.0.0.1:2000: bind: address already in use

$ cat /var/log/xray/xray.log
2019-01-30T15:21:10Z [Info] Initializing AWS X-Ray daemon 2.1.3
2019-01-30T15:21:10Z [Info] Using buffer memory limit of 160 MB
2019-01-30T15:21:10Z [Info] 2560 segment buffers allocated
2019-01-30T15:21:10Z [Info] Using region: ap-south-1

OS INFO:
NAME="Amazon Linux AMI"
VERSION="2018.03"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2018.03"
PRETTY_NAME="Amazon Linux AMI 2018.03"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2018.03:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"

But I still don't see any traces in the X-Ray console. The EC2 instance has the xray:* permission. Am I missing any configuration here?

Appreciate your valuable help on this!!!

Add health endpoint

Referring to the pinging the xray daemon forum post, it would be nice if a health endpoint could be added to the X-Ray daemon.

Currently, we use a crontab bash script in the ECS launch configuration to check the status of the X-Ray daemon. Instead, we would like to use a health endpoint.

Support config for send batch throttling?

I'm getting:

Sending segment batch failed with: ThrottlingException: Rate exceeded

I'm running the daemon inside a container running as an AWS Batch task. Many of my requests send just fine. The segments come from aws-xray-sdk, but this error originates from the daemon.

I do have concurrent Batch jobs using the same role, but I think this limit is an AWS account limit?

Is there a way to tune the max retries or enable throttling on the daemon?

SignatureDoesNotMatch: Signature expired

I am running a .NET application on Windows.
When X-Ray tries to send data, this error occurs:

2019-12-17T10:22:42-02:00 [Info] Initializing AWS X-Ray daemon 3.2.0
2019-12-17T10:22:42-02:00 [Info] Using buffer memory limit of 20 MB
2019-12-17T10:22:42-02:00 [Info] 320 segment buffers allocated
2019-12-17T10:22:42-02:00 [Info] STS Endpoint : https://sts.sa-east-1.amazonaws.com
2019-12-17T10:22:42-02:00 [Info] Using region: sa-east-1
2019-12-17T10:23:43-02:00 [Error] Get instance id metadata failed: RequestError: send request failed
caused by: Get http://169.254.169.254/latest/meta-data/instance-id: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
2019-12-17T10:23:43-02:00 [Info] HTTP Proxy server using X-Ray Endpoint : https://xray.sa-east-1.amazonaws.com
2019-12-17T10:23:43-02:00 [Info] Starting proxy http server on 127.0.0.1:2000
2019-12-17T10:24:41-02:00 [Error] Sending segment batch failed with: SignatureDoesNotMatch: Signature expired: 20191217T122441Z is now earlier than 20191217T130945Z (20191217T132445Z - 15 min.)
status code: 403, request id: 9761b2b6-20d0-11ea-aa35-8137046d754e
2019-12-17T10:24:41-02:00 [Warn] Delaying sending of additional batches by 0 seconds
2019-12-17T10:24:43-02:00 [Error] Sending segment batch failed with: SignatureDoesNotMatch: Signature expired: 20191217T122443Z is now earlier than 20191217T130947Z (20191217T132447Z - 15 min.)
status code: 403, request id: 98955033-20d0-11ea-aa35-8137046d754e
2019-12-17T10:24:43-02:00 [Warn] Delaying sending of additional batches by 0 seconds
2019-12-17T10:24:44-02:00 [Error] Sending segment batch failed with: SignatureDoesNotMatch: Signature expired: 20191217T122444Z is now earlier than 20191217T130948Z (20191217T132448Z - 15 min.)
status code: 403, request id: 99303125-20d0-11ea-aa35-8137046d754e
2019-12-17T10:24:44-02:00 [Warn] Delaying sending of additional batches by 0 seconds
2019-12-17T10:24:54-02:00 [Error] Sending segment batch failed with: SignatureDoesNotMatch: Signature expired: 20191217T122454Z is now earlier than 20191217T130958Z (20191217T132458Z - 15 min.)
status code: 403, request id: 9f5362fa-20d0-11ea-aa35-8137046d754e
2019-12-17T10:24:54-02:00 [Warn] Delaying sending of additional batches by 0 seconds
2019-12-17T10:24:55-02:00 [Error] Sending segment batch failed with: SignatureDoesNotMatch: Signature expired: 20191217T122455Z is now earlier than 20191217T130959Z (20191217T132459Z - 15 min.)
status code: 403, request id: 9fee43e2-20d0-11ea-aa35-8137046d754e
2019-12-17T10:24:55-02:00 [Warn] Delaying sending of additional batches by 0 seconds
2019-12-17T10:24:56-02:00 [Error] Sending segment batch failed with: SignatureDoesNotMatch: Signature expired: 20191217T122456Z is now earlier than 20191217T131000Z (20191217T132500Z - 15 min.)
status code: 403, request id: a08972d9-20d0-11ea-aa35-8137046d754e
2019-12-17T10:24:56-02:00 [Warn] Delaying sending of additional batches by 0 seconds
2019-12-17T10:24:59-02:00 [Error] Sending segment batch failed with: SignatureDoesNotMatch: Signature expired: 20191217T122459Z is now earlier than 20191217T131003Z (20191217T132503Z - 15 min.)
status code: 403, request id: a25a3b0c-20d0-11ea-aa35-8137046d754e
2019-12-17T10:24:59-02:00 [Warn] Delaying sending of additional batches by 0 seconds
2019-12-17T10:25:04-02:00 [Error] Sending segment batch failed with: SignatureDoesNotMatch: Signature expired: 20191217T122504Z is now earlier than 20191217T131008Z (20191217T132508Z - 15 min.)
status code: 403, request id: a5611201-20d0-11ea-aa35-8137046d754e
2019-12-17T10:25:04-02:00 [Warn] Delaying sending of additional batches by 0 seconds
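`SignatureDoesNotMatch: Signature expired` almost always means the machine's clock is skewed beyond the SigV4 tolerance of about 15 minutes; in the log above the local clock appears to be roughly an hour behind (signing time 12:24:41Z vs. the server's 13:24:45Z window). A quick way to measure skew without extra tooling is to compare the local clock against the `Date` header of a trusted HTTPS response. A sketch, to be run on the affected machine (the probe URL is just an example):

```python
import urllib.request
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def skew_seconds(date_header: str, local_now: datetime) -> float:
    """local_now minus the server time parsed from an HTTP Date header, in seconds.

    A negative value means the local clock is behind the server.
    """
    server = parsedate_to_datetime(date_header)
    return (local_now - server).total_seconds()

def check_clock(url: str = "https://aws.amazon.com") -> float:
    """Fetch a Date header over HTTPS and report local clock skew in seconds."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return skew_seconds(resp.headers["Date"], datetime.now(timezone.utc))

if __name__ == "__main__":
    print("local clock skew: {:+.1f} s".format(check_clock()))  # |skew| > ~900 s breaks SigV4 signing
```

On Windows, re-syncing the clock (e.g. via the Windows Time service) should make the errors stop.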

Sending segment batch failed with: NoCredentialProviders: no valid provider

I am running the daemon locally for my Django project to get traces.
I have configured credentials both in environment variables and in the ~/.aws/credentials file, but I am still facing this issue. Can anybody please help me figure it out?
The error I am seeing is:

Sending segment batch failed with: NoCredentialProviders: no valid provider

PS: I am using the AWS X-Ray SDK for Python. My OS is Ubuntu 18.04.

@awssandra
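As noted above, the daemon follows the aws-sdk-go default credential resolution. A common gotcha with `NoCredentialProviders`: environment variables exported in your shell are not visible to a daemon started by systemd or an init script, and the shared file is read from the daemon user's home directory, not yours. The sketch below mimics only the first two steps of the chain (env vars, then shared credentials file; the real chain continues to ECS/EC2 role metadata) as a debugging aid — it is not the daemon's actual code:

```python
import configparser
import os
from pathlib import Path

def resolve_credentials(profile: str = "default"):
    """Debugging aid: report which of the first two default-chain sources
    would supply credentials. Returns (source, access_key_id) or (None, None).
    """
    # Step 1: environment variables
    key = os.environ.get("AWS_ACCESS_KEY_ID")
    secret = os.environ.get("AWS_SECRET_ACCESS_KEY")
    if key and secret:
        return ("env", key)
    # Step 2: shared credentials file (~/.aws/credentials by default)
    creds_path = os.environ.get(
        "AWS_SHARED_CREDENTIALS_FILE", str(Path.home() / ".aws" / "credentials")
    )
    cfg = configparser.ConfigParser()
    if cfg.read(creds_path) and cfg.has_section(profile) \
            and cfg.has_option(profile, "aws_access_key_id"):
        return ("shared-file", cfg.get(profile, "aws_access_key_id"))
    return (None, None)
```

Running this as the same user (and environment) the daemon runs under usually reveals which source the daemon is failing to find.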

Support credential profiles

I just had some difficulty getting the daemon running locally due to the way my credentials are configured. To get things to work, I ultimately had to change my [default] credentials. This request is to add support for credential profiles. It's possible this already works another way; I just couldn't find it documented.

For reference, here is a sample credentials file.

[mike]
aws_access_key_id = AKIAZZZZZZZZZZZZ1234
aws_secret_access_key = abcdefghiljekls+JB2AupNJKv9
[corp]
aws_access_key_id = AKIAXXXXXXXXXXX5678
aws_secret_access_key = zyxwvutsrqponm+1CPiUFxYw
[default]
aws_access_key_id = AKIAZZZZZZZZZZZZ1234
aws_secret_access_key = abcdefghiljekls+JB2AupNJKv9

And here is the sample config file to go with that:

[profile mike]
region = us-east-1
[profile corp]
output = json
region = us-east-1
[profile corpdev]
output = json
region = us-east-1
role_arn = arn:aws:iam::123412341234:role/corp-dev-account-role
source_profile = corp
[default]
output = json
region = us-east-1

I would like to set an environment variable of AWS_PROFILE=corpdev or pass a command-line argument and have the Daemon use the specified profile's configuration/credentials.

I was able to confirm that the corpdev profile permissions were sufficient by running the following CLI:

aws xray put-trace-segments --trace-segment-documents "{\"trace_id\": \"1-5f84854a-b250fd33b4dd6208733a59df\", \"id\": \"ebd6c854cd28bca2\", \"start_time\": 1602520394, \"in_progress\": true, \"name\": \"Scorekeep-build\"}" --profile corpdev

I tried passing the role directly, like -r "arn:aws:iam::123412341234:role/corp-dev-account-role", but that resulted in an AccessDenied error indicating that the user from my [mike] (same as [default]) profile did not have permission to assume that role (which is accurate).

In order to move on, I simply changed my [default] profile to match the [corp] profile and passed the corp-dev-account-role ARN via the -r command-line option, and it works.

This was pretty difficult to diagnose due to the default logging information available:

  • Without changing the log level, all of the credential errors are absorbed and the messages seem to indicate everything was sent fine but nothing appears in the console.
  • With logging set to dev, there are a number of JSON parsing errors that seem to indicate an error but they're apparently expected (#22).
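Until profile support lands, a less invasive workaround than rewriting [default] is to extract the chosen profile yourself and export it as the environment variables the daemon's default credential chain already honors. A sketch (the script name and flow are illustrative; role assumption from a source_profile still has to go through -r):

```python
import configparser
import os
from pathlib import Path

def export_lines_for_profile(profile: str, creds_path=None) -> list:
    """Render shell `export` lines for one named profile from the shared
    credentials file, so the daemon (which reads AWS_ACCESS_KEY_ID /
    AWS_SECRET_ACCESS_KEY via aws-sdk-go's default chain) can use that
    profile without editing [default].
    """
    path = creds_path or (Path.home() / ".aws" / "credentials")
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return [
        "export AWS_ACCESS_KEY_ID=" + cfg.get(profile, "aws_access_key_id"),
        "export AWS_SECRET_ACCESS_KEY=" + cfg.get(profile, "aws_secret_access_key"),
    ]

if __name__ == "__main__":
    # Usage sketch: eval "$(python export_profile.py)" && xray -o -n us-east-1 -r <role-arn>
    print("\n".join(export_lines_for_profile(os.environ.get("AWS_PROFILE", "default"))))
```

For the corpdev case above, that would mean exporting the [corp] keys and still passing the corp-dev-account-role ARN with -r, which matches the workaround that ultimately worked.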
