
heyp-agents

Project Structure

  • experiments/ contains all of the scripts and plotting tools for running different types of experiments.

  • go/ contains all of the Go code belonging to this project. Notable tools include deploy-heyp, host-agent-sim (which fakes large numbers of host agents to stress-test the cluster-agent), proc-heyp (which processes deploy-heyp runs), and dc-control-sim (which evaluates non-dynamic properties of usage collection and enforcement techniques).

  • heyp/ contains all of the C++ code belonging to this project, including the cluster-agent and the host-agent.

Getting Started

Install dependencies

Make sure that you have a recent version of Go installed (version 1.17 or later). Then run

$ export TOOLCHAIN=/path/to/install-cpp-build-tools
$ tools/install-work-toolchain.bash # install tools to $TOOLCHAIN, no root needed

and follow the instructions printed by tools/install-work-toolchain.bash. NOTE: some scripts require that TOOLCHAIN be set to the correct value.

Prep to run experiments

Once the toolchain is installed, you'll need to build the C++ code and collect the runtime dependencies before running any experiments.

Building C++ code:

$ bazel build --config=clang-opt //heyp/...
# or use --config=clang-dbg, --config=clang-asan, or --config=clang-tsan based on your needs

Collecting runtime dependencies:

$ tools/collect-aux-bins.bash
# NOTE: you can pass arguments to only rebuild certain tools

Now you're ready to run the web and scalability experiments. For example:

$ ./0-rebuild-cmds-and-bundle.bash
$ ./1-gen-configs.bash results/2022-01-04-inc inc # see config.star for the meaning of inc
$ while ! ./2-run-all.bash results/2022-01-04-inc; do sleep 30; done
$ ./3-proc-all.py results/2022-01-04-inc
# Any of the ./4- scripts, if needed

NOTE: the data processing additionally requires R with the ggplot2, jsonlite, parallel, and reshape2 packages.

Other Notes

Contributors

goelayu, uluyol, uluyol-goog


Issues

Retry OS calls in vfortio

These can fail sporadically (e.g. iptables rule changes). We should retry a few times before giving up.
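A minimal sketch of the kind of retry wrapper this could use (illustrative only; retryOSCall, its parameters, and the package placement are hypothetical, not existing vfortio APIs):

package vfortio // hypothetical placement; adjust to the real package

import (
	"fmt"
	"time"
)

// retryOSCall runs fn up to attempts times, sleeping between failed attempts,
// and returns the last error if every attempt fails.
func retryOSCall(attempts int, backoff time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(backoff)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

A call site that changes iptables rules would then wrap the command, e.g. retryOSCall(3, 100*time.Millisecond, func() error { return exec.Command("iptables", args...).Run() }).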

inc-nl no longer violates approval

I am not seeing approval violations with inc-nl since the QoS value was fixed. What's odd is that the fix should not have impacted inc-nl. This could be run-to-run variance. Need to investigate.

Unexplained throughput drop over time

Here is an example run.

[image omitted]

There is enough bandwidth for all of the demand to be met, so what's going on? One suspicious observation is that the client processes hit 100% CPU utilization at some point during the run; that's a starting point for investigation.

Longer LOPRI didn't work in a run

In results/2021-09-03-qoschurn-smarter-routing, LOPRI clearly has higher latency than HIPRI, but in results/2021-09-10-retry-w-loadshed, qflip_lr seems unaffected by the use of LOPRI. It looks like setting up the higher-latency QoS failed, for whatever reason.

Low priority: we already have results/2021-09-03-qoschurn-smarter-routing.

Invalid frac_lopri crashes cluster-agent

Seen today:

cluster-agent: heyp/cluster-agent/allocator.cc:361: auto heyp::(anonymous namespace)::DowngradeAllocator::AllocAgg(absl::Time, const proto::AggInfo &, proto::DebugAllocRecord::DebugState *)::(anonymous class)::operator()() const: Assertion `false && "frac_lopri >= 0"' failed.

frac_lopri is likely a NaN, since I can't think of a reason it would end up < 0. Instead of treating this as a fatal error, we should issue a warning and clamp the value into [0, 1].
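A sketch of the intended behavior (the real check is C++ in heyp/cluster-agent/allocator.cc; this Go-flavored version only illustrates the warn-and-clamp logic, and sanitizeFracLOPRI is a made-up name):

package alloc // illustrative only

import (
	"log"
	"math"
)

// sanitizeFracLOPRI replaces the fatal assertion: warn on NaN or out-of-range
// values and clamp the result into [0, 1].
func sanitizeFracLOPRI(fracLOPRI float64) float64 {
	switch {
	case math.IsNaN(fracLOPRI), fracLOPRI < 0:
		log.Printf("warning: frac_lopri = %v; clamping to 0", fracLOPRI)
		return 0
	case fracLOPRI > 1:
		log.Printf("warning: frac_lopri = %v; clamping to 1", fracLOPRI)
		return 1
	default:
		return fracLOPRI
	}
}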

experiments/dc-sim: rate limit error is suspect

The uniform case is too accurate: 10 samples should not give that level of accuracy all the time, so something is going wrong. Steps to try: look at the raw limit values and at the uniform distribution estimator.
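As a rough calibration of how much error 10 samples should give (this is not the dc-sim estimator; it is a standalone check using a generic mean-based estimate of a Uniform(0, limit) upper bound):

package main

// Not the dc-sim code: a quick standalone check of how noisy an estimate built
// from only 10 Uniform(0, limit) samples typically is. If dc-sim reports error
// far below spreads like these, the error computation itself is suspect.

import (
	"fmt"
	"math"
	"math/rand"
)

func main() {
	const trueLimit = 100.0
	const numSamples = 10
	worstRelErr := 0.0
	for trial := 0; trial < 1000; trial++ {
		sum := 0.0
		for i := 0; i < numSamples; i++ {
			sum += rand.Float64() * trueLimit // draw from Uniform(0, trueLimit)
		}
		est := 2 * sum / numSamples // unbiased mean-based estimate of the upper bound
		relErr := math.Abs(est-trueLimit) / trueLimit
		worstRelErr = math.Max(worstRelErr, relErr)
	}
	fmt.Printf("worst relative error over 1000 trials: %.0f%%\n", worstRelErr*100)
}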

Using multiple backends can lead to throughput loss

As of 4864a6d, I see the following usage for inc-nl when running experiments/web:

[image omitted]

The usage should be much higher (starting out much closer to the dotted line and gradually decreasing). If I change num_AA_backends to 1 then the issue goes away:

[image omitted]

Not sure what the problem is. We might have too few connections, too many connections, a bottleneck in Envoy, or some issue with the Envoy config.

Check that a stuck host agent doesn't get the cluster agent stuck

In the below image, we plot a vertical line whenever the allocation at a host is changed.

[image omitted]

Notice on the right that the lines get "fatter". This could be caused by the cluster agent waiting on one host agent before transmitting a request to the next, which is less than ideal. We should buffer one response per host enforcer and send them in parallel. If a host is unresponsive, the new limits can simply overwrite the buffered response, since it's stale at that point anyway.
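A sketch of that buffering idea (illustrative Go; the real cluster agent is C++, and AllocBundle, hostSender, newHostSender, and Push are all made-up names standing in for the real types). Each host enforcer gets its own sender goroutine with a one-slot buffer that newer allocations overwrite, so a stuck host only delays itself:

package enforcer // illustrative only

import "sync"

// AllocBundle stands in for the real per-host allocation message.
type AllocBundle struct{}

// hostSender keeps at most one pending bundle per host; a newer bundle
// replaces the stale one, and a dedicated goroutine delivers bundles.
type hostSender struct {
	mu      sync.Mutex
	pending *AllocBundle
	ready   chan struct{} // signals that pending is non-nil
}

func newHostSender(send func(*AllocBundle) error) *hostSender {
	s := &hostSender{ready: make(chan struct{}, 1)}
	go func() {
		for range s.ready {
			s.mu.Lock()
			b := s.pending
			s.pending = nil
			s.mu.Unlock()
			if b != nil {
				_ = send(b) // a stuck host blocks only its own goroutine
			}
		}
	}()
	return s
}

// Push records the newest allocation for this host without ever blocking the
// cluster agent's allocation loop.
func (s *hostSender) Push(b *AllocBundle) {
	s.mu.Lock()
	s.pending = b // overwrite any stale, not-yet-sent bundle
	s.mu.Unlock()
	select {
	case s.ready <- struct{}{}:
	default: // a wakeup is already queued
	}
}

The cluster agent would hold one hostSender per host and call Push for every host after computing allocations; the sends then proceed in parallel.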

Envoy admin interface port 0

It seems like Envoy's admin interface is not getting its port number plumbed through correctly.

@@ -690,4 +732,4 @@ admin:
   address:
     socket_address:
       address: 0.0.0.0
-      port_value: 5001
+      port_value: 0

Unexplained zero rate limits

I don't have the pictures on hand, but I've seen cases where rate limit = 0 at some point during the run. If I recall correctly, this was taking place in an experiment where we set host rate limits explicitly to be nonzero.

Set job field for InitSimulatedWan

This would make the flow markers consistent with what we get from the cluster agent. As of 970a031, it's unnecessary because we ignore the field. Low priority, but consistency may reduce confusion down the line.

Install and use a newer ss binary

The vfortio VMs have an old ss binary installed. We can just grab a newer one from the current Alpine release and install it alongside fortio and the host-agent.
