
ndt7-client-go's Introduction


ndt7 Go client

Reference ndt7 Go client implementation.

The master branch contains stable code. We don't promise we won't break the API, but we'll try not to.

Installing

You need Go >= 1.12. This project uses Go modules, so make sure they are enabled:

export GO111MODULE=on

Clone the repository wherever you want with

git clone https://github.com/m-lab/ndt7-client-go

From inside the repository, use go get ./cmd/ndt7-client to build the client. Binaries will be placed in $GOPATH/bin, if GOPATH is set, and in $HOME/go/bin otherwise.

If you're into a one-off install, this

go install -v github.com/m-lab/ndt7-client-go/cmd/ndt7-client@latest

is equivalent to cloning the repository, running go get ./cmd/ndt7-client, and then deleting the repository directory.

Building with a custom client name

In case you are integrating an ndt7-client binary into a third-party application, it may be useful to build it with a custom client name. Since this value is passed to the server as metadata, a custom name makes it easy to find measurements coming from your integration in Measurement Lab's data.

To set a custom client name at build time:

CLIENTNAME=my-custom-client-name

go build -ldflags "-X main.ClientName=$CLIENTNAME" ./cmd/ndt7-client
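
For the -X flag to take effect, main.ClientName must exist as a package-level string variable in cmd/ndt7-client, along these lines (a sketch of the mechanism; the default value shown is illustrative):

package main

// ClientName is sent to the server as client metadata. The default shown
// here is illustrative; it can be overridden at build time with:
//
//   go build -ldflags "-X main.ClientName=my-custom-client-name"
var ClientName = "ndt7-client-go-cmd"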

Prometheus Exporter

While ndt7-client is a "single shot" ndt7 client, there is also a non-interactive periodic test runner ndt7-prometheus-exporter.

Build and Run using Docker

git clone https://github.com/m-lab/ndt7-client-go
docker build -t ndt7-prometheus-exporter .

To run tests repeatedly:

PORT=9191
docker run -d -p ${PORT}:8080 ndt7-prometheus-exporter
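
Once the container is running, you can check that metrics are exposed on the mapped port before pointing Prometheus at it:

curl http://localhost:${PORT}/metrics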

Sample Prometheus config

# scrape ndt7 test metrics
  - job_name: ndt7
    metrics_path: /metrics
    static_configs:
      - targets:
        # host:port of the exporter
        - localhost:9191

# scrape ndt7-prometheus-exporter itself
  - job_name: ndt7-prometheus-exporter
    static_configs:
      - targets:
        # host:port of the exporter
        - localhost:9191

ndt7-client-go's People

Contributors

bassosimone, cristinaleonr, fhltang, kwadronaut, pboothe, robertodauria, stephen-soltesz


ndt7-client-go's Issues

Upload tests report no results when run against non-Linux NDT server

Upload tests with ndt-server running on non-Linux (darwin) do not report any results:

$ ~/go/bin/ndt7-client -scheme wss -server 172.26.0.65:4443 -upload=true -download=false -no-verify
upload in progress with 172.26.0.65

upload: complete

Test results

    Server: 172.26.0.65
    Client: 172.26.0.65

              Upload
     Throughput:     0.0 
        Latency:     0.0 

I believe this is because ndt-server on non-Linux does not provide TCPInfo and PR #75 changed ndt7-client-go to require server-side TCPInfo for upload test results. It would be nice if there could be a fallback to client-side AppInfo in this case.
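
If such a fallback were added, it could compute upload throughput from the client-side AppInfo measurement whenever server-side TCPInfo is missing. A minimal sketch, assuming the spec.AppInfo fields NumBytes and ElapsedTime (in microseconds) defined by the ndt7 protocol:

package main

import (
  "fmt"

  "github.com/m-lab/ndt7-client-go/spec"
)

// appInfoMbps computes throughput in Mbit/s from a client-side AppInfo
// measurement. ElapsedTime is in microseconds, so 8 * bytes / microseconds
// conveniently yields Mbit/s.
func appInfoMbps(m *spec.Measurement) (float64, bool) {
  if m.AppInfo == nil || m.AppInfo.ElapsedTime <= 0 {
    return 0, false
  }
  return 8 * float64(m.AppInfo.NumBytes) / float64(m.AppInfo.ElapsedTime), true
}

func main() {
  m := &spec.Measurement{AppInfo: &spec.AppInfo{NumBytes: 1250000, ElapsedTime: 1000000}}
  if v, ok := appInfoMbps(m); ok {
    fmt.Printf("%.1f Mbit/s\n", v) // 10.0 Mbit/s
  }
}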

A pure upload test does not report latency

davet@milliways:~/git/ndt7-client-go$ ./ndt7-client -download=false &
[1] 2045136
upload in progress with ndt-mlab1-lax02.mlab-oti.measurement-lab.org
Avg. speed : 4.3 Mbit/s
upload: complete
Server: ndt-mlab1-lax02.mlab-oti.measurement-lab.org
Client:
Latency: 0.0
Download: 0.0 Mbit/s
Upload: 4.3 Mbit/s
Retransmission: 0.00

Error compiling ndt7-client on Windows

When I try to build ndt7-client on windows I get

C:\Users\panos\ndt7-client-go>go get ./cmd/ndt7-client

github.com/m-lab/tcp-info/inetdiag

..\go\pkg\mod\github.com\m-lab\[email protected]\inetdiag\inetdiag.go:88:2: undefined: unix.AF_INET

Is there something different I need to do to compile it on Windows?

Thanks,
Panos

Upload Test Result Inconsistency

Hi all,

I wanted to share some results from a controlled experiment I conducted using this client, as well as ask a question about how NDT7 calculates the final speed. I was interested in seeing how close to the true link capacity the ndt7-client-go could report under "perfect" network conditions. However, at low uplink capacities (< 10 Mbps), I found that ndt7-client-go reported a speed greater than the link capacity.

Preliminary Exploration

I took a look at how ndt7-client-go calculates the final speed and found that it uses information in the APPInfo messages, which, according to the protocol specification document, report a count of all bytes sent or received at the application level. However, in the example code provided in the browser version of NDT7, ndt7-js, the final speed is calculated using the TCPInfo messages, which provide network statistics from the TCP_INFO socket struct. I confirmed that the online NDT7 test calculates the speed using TCPInfo messages.

Having identified this methodological difference, I conducted an in-lab, controlled experiment to see how different the final speed would be depending on whether the TCPInfo messages or the APPInfo messages were used.

Experimental Setup

The setup is fairly simple. I use two identical System76 Meerkat Model meer6 desktop computers (Intel 11th Gen i5 @ 2.4 GHz, 16 GB DDR memory, and up to 2.5 Gbps throughput) running Ubuntu 20.04, one to run the ndt-server and one to run the ndt7-client-go. The two computers are then connected via Ethernet cable.

I use the tc netem module to set the link capacity.

Main results

[Figure: reported upload performance (speed divided by link capacity) versus link capacity, comparing AppInfo- and TCPInfo-based calculations]
The above plot shows how the upload speed varies as the link capacity increases. In order to compare across different link capacities, I define performance, which is the speed divided by the link capacity. The orange trend line shows the speed as calculated using the APPInfo messages (what the ndt7-client-go reports to the user), while the blue trend line shows the speed as calculated using the TCPInfo messages. For consistency, I use the final APPInfo and TCPInfo messages (which track the cumulative number of sent bytes and elapsed time) to calculate the final speed. Note that the y-axis begins at 0.9. The shaded region around the trend line represents a 95% confidence interval across n=10 runs.

I have confirmed that the main ndt7-related results are not caused by an issue with the traffic shaping by conducting the same experiment using iPerf3. iPerf3 does not report speeds greater than the link capacity.

It's clear that the reported speed can vary a lot depending on whether the TCPInfo or APPInfo statistics are used. Is there a reason that ndt7-client-go uses APPInfo while ndt7-js uses TCPInfo?
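
A plausible explanation for the gap at low capacities: client-side AppInfo counts bytes the application has written (including data still queued in the sender's socket buffers), while server-side TCPInfo counts bytes actually delivered. The numbers below are purely illustrative, not measured:

package main

import "fmt"

func main() {
  const elapsedUsec = 10000000   // 10-second test, in microseconds
  const deliveredBytes = 6250000 // bytes the server actually received (5 Mbit/s link)
  const bufferedBytes = 500000   // bytes written by the app but still queued in socket buffers

  appInfo := 8 * float64(deliveredBytes+bufferedBytes) / elapsedUsec // client-side count
  tcpInfo := 8 * float64(deliveredBytes) / elapsedUsec               // server-side count

  fmt.Printf("AppInfo: %.2f Mbit/s, TCPInfo: %.2f Mbit/s\n", appInfo, tcpInfo)
  // Output: AppInfo: 5.40 Mbit/s, TCPInfo: 5.00 Mbit/s
}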

UX: show client data as primary output

We should show client data as the primary output, because that data is never delayed and is always available. Ancillary (server-side) data can still be used, but client data is what we should try to use first.

Testing both up and down simultaneously would be a nice option

Only the flent rrul test attempts to test both up and down at the same time. Testing upload or download in series is genuinely useful, but on most networks traffic is not download-only and then upload-only; it is a mix of the two.

So being able to optionally fire off a test in both directions at the same time would be quite revealing as to what happens to many researchers in the field.

Installation/Run

Hello,
could you elaborate on how to install/run the client?

Many thanks,
George

Run client (ndt7-prometheus-exporter) against multiple servers

I would like to run the client against four-ish servers, to monitor different links in my network, for comparison.

I don't see a way to do that with the current code, unless I have missed something.

I could run 4 exporters, but of course I don't want them to test at the same time, as they will all compete for the local connection.

  • Is there a way to do this?
  • Should I modify the code to accommodate my use case, or is this a bad idea?
    • I haven't developed in Go before, but the code seems fairly straightforward.
    • I still need to understand this memoryless ticker / Poisson process thing (see the sketch after this post).

Thanks
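
For reference, the "memoryless ticker" mentioned above schedules test runs with exponentially distributed gaps, which makes the sequence of runs a Poisson process: the wait until the next test does not depend on how long you have already waited. A minimal sketch of the idea using only the standard library (the real exporter may also clamp the interval between a minimum and a maximum):

package main

import (
  "fmt"
  "math/rand"
  "time"
)

// waitExponential sleeps for an exponentially distributed duration with the
// given mean, the defining property of a memoryless (Poisson) schedule.
func waitExponential(mean time.Duration) {
  time.Sleep(time.Duration(rand.ExpFloat64() * float64(mean)))
}

func main() {
  for i := 0; i < 3; i++ {
    waitExponential(2 * time.Second) // a real exporter would use a much longer mean
    fmt.Println("run the next ndt7 test here")
  }
}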

Make the summary data hierarchical

This has been suggested by @stephen-soltesz here. The summary output should include two embedded objects, Download and Upload, each containing the information for that specific test, e.g.:

{
  "Download": {
    "UUID": "...",
    "Speed": {
      "Value": 0,
      "Unit": "Mbit/s"
    },
    "StartTime": "...",
    "EndTime": "..."
  },
  "Upload": { ... },
  [...]
}
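
A sketch of Go types that would marshal to that shape (field names follow the example above; the existing Summary type would be restructured accordingly):

import "time"

// ValueUnitPair pairs a numeric value with its unit (e.g. "Mbit/s").
type ValueUnitPair struct {
  Value float64
  Unit  string
}

// SubtestSummary holds the summary for a single subtest (download or upload).
type SubtestSummary struct {
  UUID      string
  Speed     ValueUnitPair
  StartTime time.Time
  EndTime   time.Time
}

// Summary is the hierarchical top-level summary.
type Summary struct {
  Download SubtestSummary
  Upload   SubtestSummary
}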

Getting "websocket: bad handshake" response when using WS scheme

I am getting the following error when using the go client:

    starting download
    download failed: websocket: bad handshake
    download: complete
    
    starting upload
    upload failed: websocket: bad handshake
    upload: complete

But if I use the default:

    starting download
    download in progress with ndt-iupui-mlab3-sea03.mlab-oti.measurement-lab.org
    
    Avg. speed  :     0.1 Mbit/s
    Avg. speed  :     0.1 Mbit/s
    Avg. speed  :     0.1 Mbit/s
    Avg. speed  :     0.1 Mbit/s
    Avg. speed  :     0.0 Mbit/s2020/10/23 10:55:15 ERROR: trafficshape: error on throttled read: read tcp 192.168.1.2:55008->173.205.3.37:443: i/o timeout
    
    download: complete
    
    starting upload
    upload in progress with ndt-iupui-mlab3-sea03.mlab-oti.measurement-lab.org
    
    Avg. speed  :     1.2 Mbit/s
    Avg. speed  :     1.1 Mbit/s
    Avg. speed  :     1.0 Mbit/s
    Avg. speed  :     0.9 Mbit/s
    Avg. speed  :     0.9 Mbit/s
    Avg. speed  :     0.9 Mbit/s
    Avg. speed  :     0.9 Mbit/s
    Avg. speed  :     0.8 Mbit/s
    Avg. speed  :     0.7 Mbit/s
    Avg. speed  :     0.6 Mbit/s
    Avg. speed  :     0.5 Mbit/s
    Avg. speed  :     0.3 Mbit/s2020/10/23 10:55:32 ERROR: trafficshape: failed write: write tcp 192.168.1.2:55014->173.205.3.37:443: i/o timeout
    2020/10/23 10:55:32 ERROR: trafficshape: failed write: write tcp 192.168.1.2:55014->173.205.3.37:443: i/o timeout
    
    upload: complete
             Server: ndt-iupui-mlab3-sea03.mlab-oti.measurement-lab.org
             Client: 
            Latency:     0.0 
           Download:     0.0 Mbit/s
             Upload:     0.3 Mbit/s
     Retransmission:    0.00

With the ARM devices I am using, the WSS scheme yields significantly lower results, but this error has not been observed before. Is this possibly due to distance or speed?

Try next available server on doConnection failure

The Locate API returns multiple available servers for the client to use. The ndt server may decline to perform a measurement at any time, e.g. if it is overloaded, in lame-duck mode, or has detected abusive behavior. So, clients should try the next available server returned by the Locate API, to be resilient to any single server failing to run a measurement.

This section should be in a loop, breaking only once all candidate servers have been tried:

ndt7-client-go/ndt7.go

Lines 222 to 234 in 10bc591

s, err := c.getURLforPath(ctx, p)
if err != nil {
  return nil, err
}
u, err := url.Parse(s)
if err != nil {
  return nil, err
}
c.FQDN = u.Hostname()
conn, err := c.doConnect(ctx, u.String())
if err != nil {
  return nil, err
}
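
A hedged sketch of that loop, following the names in the excerpt above (the candidate URL list and the surrounding function signature are assumptions about how the Locate results would be threaded through; conn is a *websocket.Conn from github.com/gorilla/websocket):

// connectToAnyServer tries each candidate URL returned by the Locate API in
// order, falling through to the next one when doConnect fails, and returns
// the last error only once every candidate has been tried.
func (c *Client) connectToAnyServer(ctx context.Context, candidateURLs []string) (*websocket.Conn, error) {
  var lastErr error
  for _, s := range candidateURLs {
    u, err := url.Parse(s)
    if err != nil {
      lastErr = err
      continue
    }
    c.FQDN = u.Hostname()
    conn, err := c.doConnect(ctx, u.String())
    if err != nil {
      lastErr = err
      continue // this server declined or failed; try the next one
    }
    return conn, nil
  }
  return nil, lastErr
}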

Possible easier to use top-level API

It seems the current API based on channels is not very comfortable to use. A better design would probably be to expose a blocking API with a callback, for example:

func (c *Client) Run(ctx context.Context, fn func(m *spec.Measurement)) (summary Summary, err error) {
  var ch chan spec.Measurement
  ch, err = c.StartDownload(ctx)
  if err != nil {
    return
  }
  for m := range ch {
    // TODO: fill the summary
    fn(m)
  }
  ch, err = c.StartUpload(ctx)
  if err != nil {
    return
  }
  for m := range ch {
    // TODO: fill the summary
    fn(m)
  }
  return
}

This may or may not be combined with a summary. Note that this API does not cause data races because the callback is called in the same goroutine as the blocking Run call.
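
A usage sketch for such an API (Run and Summary are hypothetical, as proposed above; ndt7.NewClient takes a client name and version):

ctx := context.Background()
client := ndt7.NewClient("my-client", "0.1.0")
summary, err := client.Run(ctx, func(m *spec.Measurement) {
  // The callback runs in the same goroutine as Run, so no locking is needed.
  fmt.Printf("%+v\n", *m)
})
if err != nil {
  log.Fatal(err)
}
fmt.Printf("summary: %+v\n", summary)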

document missing dependencies to compile for Windows

The ndt7-client-go has dependencies that require new development to support compiling for Windows.

  • document the dependencies missing in order for this client to work under Windows
  • write shims for Windows to support all the functionality used under Linux so that the code compiles

Option / flag to use IPv4 or IPv6

It would be nice to have a user option/flag to specify whether to use IPv4 or IPv6, as route differences can influence the test results.
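
One way such a flag could be wired up is to force the network the dialer uses, e.g. "tcp4" or "tcp6". A sketch assuming the client's underlying gorilla/websocket Dialer is reachable (the flag plumbing itself is not shown):

import (
  "context"
  "net"

  "github.com/gorilla/websocket"
)

// forceNetwork makes every connection from the dialer use the given network:
// "tcp4" restricts it to IPv4, "tcp6" to IPv6.
func forceNetwork(d *websocket.Dialer, network string) {
  nd := &net.Dialer{}
  d.NetDialContext = func(ctx context.Context, _, addr string) (net.Conn, error) {
    return nd.DialContext(ctx, network, addr)
  }
}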

"bad handshake" for upload after download test

This is similar to #56, but I am not trying to bypass the Locate service, so I should be using the token correctly.

Almost always (but not 100% of the time), the upload test fails with "bad handshake" after a successful download test:

$ ./go/bin/ndt7-client 
download in progress with ndt-mlab1-iad02.mlab-oti.measurement-lab.org
Avg. speed  :  2922.1 Mbit/s
download: complete
upload failed: websocket: bad handshake

upload: complete

$ ~/go/bin/ndt7-client 
download in progress with ndt-mlab2-iad02.mlab-oti.measurement-lab.org
Avg. speed  :  2915.8 Mbit/s
download: complete
upload failed: websocket: bad handshake

upload: complete

$ ./go/bin/ndt7-client 
download in progress with ndt-mlab1-iad03.mlab-oti.measurement-lab.org
Avg. speed  :  2457.6 Mbit/s
download: complete
upload failed: websocket: bad handshake

upload: complete

but not always:

$ ~/go/bin/ndt7-client 
download in progress with ndt-mlab2-iad02.mlab-oti.measurement-lab.org
Avg. speed  :  2611.2 Mbit/s
download: complete
upload in progress with ndt-mlab2-iad02.mlab-oti.measurement-lab.org
Avg. speed  :  2913.2 Mbit/s
upload: complete
         Server: ndt-mlab2-iad02.mlab-oti.measurement-lab.org
         Client: 2600:1f18:143d:xxxx:xxxx:xxxx:xxxx:xxxx
        Latency:     0.5 ms
       Download:  2611.2 Mbit/s
         Upload:  2913.2 Mbit/s
 Retransmission:    0.08 %

If I explicitly disable download, the upload test alone always succeeds:

$ ~/go/bin/ndt7-client -download=false
upload in progress with ndt-mlab1-iad06.mlab-oti.measurement-lab.org
Avg. speed  :  2916.5 Mbit/s
upload: complete
         Server: ndt-mlab1-iad06.mlab-oti.measurement-lab.org
         Client: 
        Latency:     0.0 
       Download:     0.0 Mbit/s
         Upload:  2916.5 Mbit/s
 Retransmission:    0.00 

$ ~/go/bin/ndt7-client -download=false
upload in progress with ndt-mlab1-iad04.mlab-oti.measurement-lab.org
Avg. speed  :  2877.1 Mbit/s
upload: complete
         Server: ndt-mlab1-iad04.mlab-oti.measurement-lab.org
         Client: 
        Latency:     0.0 
       Download:     0.0 Mbit/s
         Upload:  2877.1 Mbit/s
 Retransmission:    0.00 

$ ~/go/bin/ndt7-client -download=false
upload in progress with ndt-mlab2-iad05.mlab-oti.measurement-lab.org
Avg. speed  :  3072.7 Mbit/s
upload: complete
         Server: ndt-mlab2-iad05.mlab-oti.measurement-lab.org
         Client: 
        Latency:     0.0 
       Download:     0.0 Mbit/s
         Upload:  3072.7 Mbit/s
 Retransmission:    0.00 

I had thought it might be a problem with token reuse, but when I explicitly reuse the same token over separate calls to download and upload, it works seemingly reliably (these all used the same token):

$ ~/go/bin/ndt7-client -service-url 'wss://ndt-mlab1-iad02.mlab-oti.measurement-lab.org/ndt/v7/download?access_token=ACCESS_TOKEN_OBTAINED_FROM_LOCATE'
download in progress with ndt-mlab1-iad02.mlab-oti.measurement-lab.org
Avg. speed  :  3235.5 Mbit/s
download: complete
         Server: ndt-mlab1-iad02.mlab-oti.measurement-lab.org
         Client: 2600:1f18:143d:xxxx:xxxx:xxxx:xxxx:xxxx
        Latency:     1.0 ms
       Download:  3235.5 
         Upload:     0.0 
 Retransmission:    0.12 %

$ ~/go/bin/ndt7-client -service-url 'wss://ndt-mlab1-iad02.mlab-oti.measurement-lab.org/ndt/v7/download?access_token=ACCESS_TOKEN_OBTAINED_FROM_LOCATE'
download in progress with ndt-mlab1-iad02.mlab-oti.measurement-lab.org
Avg. speed  :  3282.7 Mbit/s
download: complete
         Server: ndt-mlab1-iad02.mlab-oti.measurement-lab.org
         Client: 2600:1f18:143d:xxxx:xxxx:xxxx:xxxx:xxxx
        Latency:     1.0 ms
       Download:  3282.7 
         Upload:     0.0 
 Retransmission:    0.08 %

$ ~/go/bin/ndt7-client -service-url 'wss://ndt-mlab1-iad02.mlab-oti.measurement-lab.org/ndt/v7/upload?access_token=ACCESS_TOKEN_OBTAINED_FROM_LOCATE'
upload in progress with ndt-mlab1-iad02.mlab-oti.measurement-lab.org
Avg. speed  :  2639.4 Mbit/s
upload: complete
         Server: ndt-mlab1-iad02.mlab-oti.measurement-lab.org
         Client: 
        Latency:     0.0 
       Download:     0.0 Mbit/s
         Upload:  2639.4 Mbit/s
 Retransmission:    0.00 

$ ~/go/bin/ndt7-client -service-url 'wss://ndt-mlab1-iad02.mlab-oti.measurement-lab.org/ndt/v7/upload?access_token=ACCESS_TOKEN_OBTAINED_FROM_LOCATE'
upload in progress with ndt-mlab1-iad02.mlab-oti.measurement-lab.org
Avg. speed  :  2887.4 Mbit/s
upload: complete
         Server: ndt-mlab1-iad02.mlab-oti.measurement-lab.org
         Client: 
        Latency:     0.0 
       Download:     0.0 Mbit/s
         Upload:  2887.4 Mbit/s
 Retransmission:    0.00 

add flag to enable client metadata to be set

Our JavaScript client code provides a way to set client metadata fields such as clientApplication or clientOS that are then stored in BigQuery. For example, for ndt5, these values can be found in:

measurement-lab.ndt.ndt5.result.Control.ClientMetadata.Name
measurement-lab.ndt.ndt5.result.Control.ClientMetadata.Value

Add a flag or other means of setting this metadata in ndt7-client-go.
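
A sketch of how a repeatable flag could collect such metadata (the flag name and the way the values are handed to the client are assumptions; per the ndt7 protocol, metadata ultimately travels as query-string parameters):

import (
  "flag"
  "fmt"
  "strings"
)

// metadataFlags accumulates repeated -client-metadata name=value options.
type metadataFlags map[string]string

func (m metadataFlags) String() string { return fmt.Sprint(map[string]string(m)) }

func (m metadataFlags) Set(kv string) error {
  parts := strings.SplitN(kv, "=", 2)
  if len(parts) != 2 {
    return fmt.Errorf("expected name=value, got %q", kv)
  }
  m[parts[0]] = parts[1]
  return nil
}

var clientMetadata = metadataFlags{}

func init() {
  flag.Var(clientMetadata, "client-metadata",
    "additional name=value metadata to send to the server (repeatable)")
}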

HTTP Proxy support

Can I use ndt7-prometheus-exporter behind an HTTP proxy?
I tried setting the environment variables https_proxy, http_proxy, HTTPS_PROXY, and HTTP_PROXY, but nothing worked:

My docker-compose.yml looks like this:

version: "3"

services:
  ndt-exporter:
    image: ndt7-prometheus-exporter
    ports:
      - 9191:8080
    environment:
      http_proxy: http://myproxy.com:3128
      https_proxy: http://myproxy.com:3128
      HTTP_PROXY: http://myproxy.com:3128
      HTTPS_PROXY: http://myproxy.com:3128
    command:
      - -timeout
      - 3s
      - -server
      - ndt.mydomain.com

It always looks like it tries to connect directly:

download failed: dial tcp 123.x.x.x:443: i/o timeout
upload failed: dial tcp: lookup ndt.mydomain.com: i/o timeout
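
For what it's worth, Go programs only honor those variables when the HTTP transport or WebSocket dialer is configured to read them. A minimal sketch of a proxy-aware dialer using gorilla/websocket (whether the exporter lets you substitute its dialer today is exactly what this issue is asking about):

import (
  "net/http"

  "github.com/gorilla/websocket"
)

// proxyAwareDialer returns a WebSocket dialer that honors the standard
// HTTP_PROXY / HTTPS_PROXY / NO_PROXY environment variables by tunnelling
// the connection through the proxy with CONNECT.
func proxyAwareDialer() *websocket.Dialer {
  return &websocket.Dialer{
    Proxy: http.ProxyFromEnvironment,
  }
}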

Single number summaries can be misleading

I am (happily!) testing the new upload-only test and patch on my Starlink terminal. And thank you for that, sorry to be a pest. It appears to report an average...

time ./ndt7-client --download=false
upload in progress with ndt-mlab3-lax05.mlab-oti.measurement-lab.org
Avg. speed  :     2.1 Mbit/s
upload: complete

Test results

    Server: ndt-mlab3-lax05.mlab-oti.measurement-lab.org
    Client: 98.97.58.24

              Upload
     Throughput:     2.1 Mbit/s
        Latency:    38.5 ms

Over the course of this test run, however, latency varied by quite a lot.

-- ndt-mlab1-lax05.mlab-oti.measurement-lab.org ping statistics ---
19 packets transmitted, 19 received, 0% packet loss, time 18020ms
rtt min/avg/max/mdev = 26.910/96.673/162.339/40.269 ms

As you might imagine, this much jitter is pure hell on many interactive applications. I am under the impression that the backend samples TCP_INFO once per 10 ms?

#81

ndt7 vs other speedtest tools

Dear,

thanks for the great work on the ndt7 client. It's really easy to download and start using the "app". The retransmissions feature is really interesting. We've been playing around with multiple speed test tools at various vantage points. We found one particular case where ndt7 has been presenting consistently lower results than the other speed test tools (namely Ookla's and iPerf UDP). Would you be able to help us understand why? Please let me know if there's any debug information I can provide to help investigate this "issue". In this case, both ndt7 and Ookla run in sequence from the same device, and the command runs from a crontab execution. In the picture below, ndt7 reaches 180 Mbps while Ookla goes all the way up to 440 Mbps. We have been observing, from other instances, that the speeds from all the tools generally tend to converge at a common level.
I'd appreciate any help. Thanks.

[Chart: ndt7 measuring around 180 Mbps while Ookla reaches up to 440 Mbps from the same device]

Malware reports against ndt7-client.exe

hybrid-analysis.com and www.joesandbox.com tagged a Windows ndt7 binary as malware.

Given the current problems with ndt7 on mlab3s at 17 sites, I was wondering if this might be a rogue binary that uses ndt7 as chaff to cover something nefarious. See:
https://www.joesandbox.com/analysis/208192/0/html
https://www.hybrid-analysis.com/sample/b8af2d7c80de793120bf95211ecb6956366dbd6d9b8a705f909de0cb9f8ee1d6?environmentId=120

A Google search for "ndt7-client.exe" (include the quotes) finds multiple reports similar to the above.
