
Comments (10)

dustin commented on August 19, 2024

I've modified my flags to something a bit less intense:

dish_grpc_influx.py -t 10 --all-samples -v status usage alert_detail

I'm still seeing consistent core consumption.

from starlink-grpc-tools.

dustin commented on August 19, 2024

Seems to be working as expected with

dish_grpc_influx.py -t 10 -v status alert_detail


sparky8512 commented on August 19, 2024

-t 10 is a little frequent for polling the history buffer, which the usage group still needs, but even that should not saturate CPU time.

Running outside docker, I get about 1.3% CPU usage average running your original command (writing to a local InfluxDB instance, but that shouldn't matter to the python process running the script), and this is on a PC that is something like 15 years old.

I do see the CPU usage spike when it wakes up when I run the published neurocis/starlink-grpc-tools docker image, but not so much when I build my own docker image. Are you running the published image? What OS are you using to run Docker?
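For anyone who wants to reproduce these numbers, here's a quick standalone sketch of how to measure CPU seconds per loop iteration from Python. The workload below is a stand-in, not part of the tools; substitute the actual polling call.

```python
import time

def cpu_seconds_per_iteration(poll, iterations=10):
    """Average CPU seconds consumed by each call to poll()."""
    start = time.process_time()  # CPU time, not wall-clock time
    for _ in range(iterations):
        poll()
    return (time.process_time() - start) / iterations

# Stand-in workload; replace with the real status/history polling routine.
avg = cpu_seconds_per_iteration(lambda: sum(i * i for i in range(100_000)))
print(f"{avg:.4f} CPU seconds per iteration")
```

Using time.process_time() rather than time.time() means sleeps between polls don't count, so the result is directly comparable to the per-iteration CPU figures quoted in this thread.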


sparky8512 commented on August 19, 2024

To answer your original question: There are a few areas where the code is not well optimized, either for simplicity's sake or to better allow code sharing among the four dish_grpc_*.py scripts, but as far as I could tell, the performance impact of those decisions should be on the order of milliseconds per loop iteration.

If I had to guess, I'd say the most likely culprit for performance issues is the grpc package, especially if it is not using a native implementation. I tried building a docker image based on the Alpine Python base image and it was hella slow, I think for that reason.


dustin commented on August 19, 2024

I'm running NixOS 20.09. A Nix derivation could probably make for an ideal implementation, but that sits at an intersection of things I'm not super familiar with. I'm currently in an acceptable state, but I thought it was worth raising, since such a small change in flags seems to cost so much CPU.


sparky8512 commented on August 19, 2024

It's definitely interesting, but removing the usage mode group is not a small change. Without it, the script doesn't need to poll the get_history grpc request at all, only get_status, which returns much less data than get_history. It also skips parsing the history buffer entirely, and since the code in starlink_grpc computes all the history stats regardless of which specific history mode groups were requested, it's still possible the parsing logic is the performance drag.
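To illustrate the distinction, here's a purely hypothetical sketch of gating the history work on the requested mode groups. None of these names are the actual starlink_grpc API; they just show why dropping the usage group removes both the large grpc response and the parsing loop.

```python
# Hypothetical sketch; these names are illustrative, not the real
# starlink_grpc API. Mode groups that require the history buffer:
HISTORY_GROUPS = {"usage", "ping_drop", "ping_latency"}

def collect(requested_groups, get_status, get_history):
    data = dict(get_status())  # small response, negligible parse cost
    if HISTORY_GROUPS & set(requested_groups):
        # Only fetch and parse the much larger history buffer when a
        # group that actually needs it was requested.
        data.update(parse_history(get_history()))
    return data

def parse_history(history):
    # Stand-in for the real per-sample parsing loop.
    return {"samples": len(history)}
```

With only status and alert_detail requested, the history branch is skipped entirely, which matches the behavior difference reported above.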

I think I figured out what it is about my own build of the docker image that was making it faster than the published image. I have some Dockerfile changes related to the generated pb2 files, and those appear to have resulted in different versions of the grpc and protobuf Python packages being installed, for reasons I don't understand. Anyway, when I revert those changes, I see the same CPU spikes as with the published neurocis/starlink-grpc-tools image, which looks like about 2 CPU seconds per loop iteration (vs 0.02 CPU seconds per loop iteration either outside docker or with my changes).

That's still a lot less than 100% constantly, but it does look like the published docker image is missing the native build of the protobuf internal implementation, so maybe that, combined with some odd interaction between Docker and your OS config, is blowing things up.


neurocis commented on August 19, 2024

Sorry, just jumping in: are there Docker optimizations that need to be pulled in for the proto files?


sparky8512 commented on August 19, 2024

I think I figured out what is going on. It's not my changes themselves; it's that my changes cause docker build to skip a cached build layer that already has the Python packages installed. Somehow, a protobuf Python package without the native build got installed, and that is now in the build cache.

If you rebuild the Docker image as-is with 'docker build --no-cache' and publish that, I think it should pick up the native protobuf build. This can be verified by shelling into the running container and doing:

ls /usr/local/lib/python3.9/site-packages/google/protobuf/internal/

If you see a file that looks like _api_implementation.cpython-39-x86_64-linux-gnu.so, that's the native build. If not, protobuf will wind up using a pure Python implementation.
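The same check can also be done from Python without hunting for the .so file. As far as I know, the protobuf package reports its active implementation via an internal module (internal, so the exact return values can vary between protobuf versions):

```python
# Report which protobuf implementation is active. "python" is the slow
# pure-Python fallback; "cpp" (or "upb" in newer releases) is native.
from google.protobuf.internal import api_implementation

impl = api_implementation.Type()
print(f"protobuf implementation: {impl}")
if impl == "python":
    print("warning: pure-Python protobuf; expect much higher CPU usage")
```

Running this inside the container should print "python" for the affected image and "cpp" once the native build is picked up.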

We should discuss my other changes, too, but we can do that in a separate context. I'll open a specific issue for that when I get a chance. The thing I believe to have caused this issue has led me to rethink that change a little and I want to mull it over for a bit.

I don't think this issue is particularly urgent, though, so there's no rush.


sparky8512 commented on August 19, 2024

OK, sorry for taking so long to get to this, but the published starlink-grpc-tools docker image has now been updated with a version of the protobuf Python package that has a native Linux binary for Python 3.9. I retested with the command line options from the original comment on this issue and observe an average of 0.2% CPU usage, similar to what I observe running outside docker.

@dustin, could you pull the latest neurocis/starlink-grpc-tools image and retest to confirm that this addresses the performance issue you originally reported? I expect it will at least improve things significantly. If you already have an earlier image on your system, you will need to update it with something like:

docker pull neurocis/starlink-grpc-tools


sparky8512 commented on August 19, 2024

I'm going to assume things are sufficiently OK now and close out this issue. Please reopen, or file a new issue, if performance problems persist.

