
starlink-grpc-tools's People

Contributors

banqueroot, boswelja, deancording, jbuck2005, luxifr, neurocis, peterhasse, sparky8512

starlink-grpc-tools's Issues

IndexError: list index (nnn) out of range (3155b85a fw)

Just started seeing this error this morning ... not sure what's going on yet. It might've started with a fw update, since it started ~3:51am local time.

starlink-grpc-tools     | current counter:       23098
starlink-grpc-tools     | All samples:           900
starlink-grpc-tools     | Valid samples:         900
starlink-grpc-tools     | Traceback (most recent call last):
starlink-grpc-tools     |   File "/app/dish_grpc_influx.py", line 330, in <module>
starlink-grpc-tools     |     main()
starlink-grpc-tools     |   File "/app/dish_grpc_influx.py", line 311, in main
starlink-grpc-tools     |     rc = loop_body(opts, gstate)
starlink-grpc-tools     |   File "/app/dish_grpc_influx.py", line 254, in loop_body
starlink-grpc-tools     |     rc = dish_common.get_data(opts, gstate, cb_add_item, cb_add_sequence, add_bulk=cb_add_bulk)
starlink-grpc-tools     |   File "/app/dish_common.py", line 200, in get_data
starlink-grpc-tools     |     rc = get_history_stats(opts, gstate, add_item, add_sequence)
starlink-grpc-tools     |   File "/app/dish_common.py", line 296, in get_history_stats
starlink-grpc-tools     |     groups = starlink_grpc.history_stats(parse_samples,
starlink-grpc-tools     |   File "/app/starlink_grpc.py", line 968, in history_stats
starlink-grpc-tools     |     if not history.scheduled[i]:
starlink-grpc-tools     | IndexError: list index (598) out of range

Using the latest docker image.

Looking back at the data captured before the update, I see the dish was on a different fw (ee5aa15c). What kind of diagnostics should I provide to help?
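
For reference, a guard of roughly this shape (a sketch, not the project's actual fix) would avoid the IndexError when a repeated field such as scheduled comes back shorter than pop_ping_drop_rate:

def scheduled_at(history, i):
    # Fields the firmware has stopped populating may be shorter than
    # pop_ping_drop_rate; treat missing entries as unknown.
    return history.scheduled[i] if i < len(history.scheduled) else None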

Publish to PyPI

It would be super cool if we could use this easily as a dependency in other projects. Consider publishing to a package index like https://pypi.org/
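
For illustration, consuming it from PyPI might look something like this (the package name is hypothetical; status_data is the existing core-module entry point seen in tracebacks elsewhere on this tracker):

# pip install starlink-grpc-tools  (hypothetical package name)
import starlink_grpc

# Returns the status field groups as dictionaries.
status, obstruction_detail, alert_detail = starlink_grpc.status_data()
print(status["uptime"])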

starlink_grpc documentation doesn't appear to match get_status response

Below is a sample of the get_status result from my Starlink:

device_info {
  id: "0000000000-00000000-00000000"
  hardware_version: "rev3_proto2"
  software_version: "2a2e1faf-c874-430c-b4a7-b7ce5990af05.uterm.release"
  country_code: "NZ"
  utc_offset_s: 39601
  bootcount: 58
}
device_state {
  uptime_s: 638533
}
obstruction_stats {
  wedge_fraction_obstructed: 0.0
  wedge_fraction_obstructed: 0.0
  wedge_fraction_obstructed: 0.0
  wedge_fraction_obstructed: 0.0
  wedge_fraction_obstructed: 0.0
  wedge_fraction_obstructed: 0.0
  wedge_fraction_obstructed: 0.0
  wedge_fraction_obstructed: 0.0
  wedge_fraction_obstructed: 0.0
  wedge_fraction_obstructed: 0.0
  wedge_fraction_obstructed: 0.0
  wedge_fraction_obstructed: 0.0
  wedge_abs_fraction_obstructed: 0.0
  wedge_abs_fraction_obstructed: 0.0
  wedge_abs_fraction_obstructed: 0.0
  wedge_abs_fraction_obstructed: 0.0
  wedge_abs_fraction_obstructed: 0.0
  wedge_abs_fraction_obstructed: 0.0
  wedge_abs_fraction_obstructed: 0.0
  wedge_abs_fraction_obstructed: 0.0
  wedge_abs_fraction_obstructed: 0.0
  wedge_abs_fraction_obstructed: 0.0
  wedge_abs_fraction_obstructed: 0.0
  wedge_abs_fraction_obstructed: 0.0
  valid_s: 638222.0
  avg_prolonged_obstruction_interval_s: nan
}
alerts {
}
downlink_throughput_bps: 25744.638671875
uplink_throughput_bps: 143302.59375
pop_ping_latency_ms: 26.399999618530273
boresight_azimuth_deg: 170.00000000000000
boresight_elevation_deg: 60.00000000000000
gps_stats {
  gps_valid: true
  gps_sats: 15
}
eth_speed_mbps: 1000
is_snr_above_noise_floor: true
ready_states {
  cady: true
  scp: true
  l1l2: true
  xphy: true
  aap: true
  rf: true
}

This does not appear to match what is documented in the starlink_grpc file.
(I have redacted some information, not sure if it can be used to track me lol)

I'm wondering if the square dishy sends entirely different data compared to the round dishy, or if a recent software update changed a lot of things.

no attribute 'wedge_abs_fraction_obstructed'

Hello, I am on starlink version ee39beae-c399-4648-a7d6-949193d8c910.uterm.release and release v1.1.0 of the python library. I am currently getting this error:

File "/usr/local/lib/python3.10/site-packages/starlink_grpc.py", line 765, in status_data
    "wedges_fraction_obstructed[]": status.obstruction_stats.wedge_abs_fraction_obstructed,
AttributeError: 'DishObstructionStats' object has no attribute 'wedge_abs_fraction_obstructed'
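
One generic way to tolerate this kind of field removal (a sketch, relying on the fact that protobuf message objects raise AttributeError for unknown fields):

# getattr with a default absorbs the AttributeError when the reflected
# protocol no longer defines the field; an empty list keeps downstream
# array handling working.
wedges = getattr(status.obstruction_stats, "wedge_abs_fraction_obstructed", [])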

Make robust against field removal in grpc protocol

(These are mostly notes to self, for anyone wondering why I'm being so long-winded about this...)

Several times in the past, SpaceX has removed fields from the dish's grpc protocol that the scripts in this project were using, resulting in a script crash on attempt to read any data from the same category (status, history stats, bulk history, or location). While loss of that data is unavoidable unless it can be reconstructed from some other field, it would be much better if it didn't cause a crash. I mentioned in issue #65 that I would put some thought into how to accomplish that. After all, one of the intentions of the core module (starlink_grpc.py) is to insulate the calling scripts from the details of the grpc protocol, even though it's mostly just passing the data along as-is in dictionary form instead of protobuf structure.

The problem here is a result of using grpc reflection to pull the protobuf message structure definitions instead of pre-compiling them using protoc and delivering them along with this project. It's pretty clear from the protobuf documentation that the intended usage is to pre-compile the protocol definitions, but there's a specific reason why I don't do it that way: SpaceX has not published those protocol definitions other than by leaving the reflection service enabled, and thus have not published them under a license that would allow for redistribution without violating copyright. Whether or not they care is a different story, but I'd prefer not to get their legal team thinking about why reflection is even enabled.

Before I built use of reflection into the scripts (by way of yagrc), I avoided the copyright question by making users generate the protocol modules themselves. This complicated installation, caused some other problems, and ultimately didn't actually fix this problem, it just moved it earlier, since it was still just getting the protocol definitions via reflection.

So, other than use of pre-compiled protocol definitions, I can think of 2 main options:

  1. Wrap all (or at least most) access to message structure attributes inside a try clause that catches AttributeError, or otherwise access them in a way that won't break if the attribute doesn't exist. This could get messy fast, since I don't want a single missing attribute to cause total failure, but it would probably be manageable. It also really fights against the design intent of how rigidly these structures are defined by the protoc compiler, but I find myself not really caring about that, as it's exactly this rigidity that is causing the problem here. (A rough sketch of this option follows the list.)

  2. Stop using the protocol message structures entirely and switch to pulling fields by protobuf ID instead. This would make the guts of the code less readable, but would insulate this project against the possibility that SpaceX ever decides to disable the reflection service. I have no reason to believe they would do so, but I also have little reason to believe they wouldn't. I'm not sure how difficult this would be, though, as the low-level protobuf Python module is not meant to be used in this way, as far as I can tell. Also, this would still cause problems if SpaceX were to reuse field IDs from obsoleted fields that have been removed, but they haven't been doing that (and it's bad practice in general, as it can break their own client software), as far as I can tell. If I were writing this from scratch, knowing what I do now, I would probably try to do it this way.
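
A rough sketch of what option 1 could look like (illustrative only, not a commitment to this exact shape):

def optional_field(message, name, default=None):
    # Reflected protobuf message classes raise AttributeError for fields
    # that no longer exist in the protocol definition.
    try:
        return getattr(message, name)
    except AttributeError:
        return default

# e.g. "seconds_obstructed": optional_field(status.obstruction_stats,
#                                           "last_24h_obstructed_s")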

However it's done, I should note that this will make obsoleted fields less obvious. The removed data will just stop being reported by the tools. I have been using these failures as an opportunity to update the documentation on which items are obsolete, but I suspect this will still get noticed, and I'd rather have the documentation lag behind a bit than have the scripts break.

MQTT Mode?

Having trouble getting this to work with MQTT. I am able to run it, but when I try to use the MQTT script, I run into issues with the mode option; I assume I'm not doing something right.

Here is what I have

python3 dish_grpc_mqtt.py --hostname xxxxxx --port 1883 --username mqtt-user --password xxxxxxxxx mode usage

I've tried different variations: 'usage', mode=usage, --mode, etc. Any thoughts?

Add periodic loop option to all polling scripts

The dishStatusInflux.py script implements status info polling in a periodic loop (although it can now be run in one-shot mode, too, by passing the -t 0 option).

The other status and history stats polling scripts are all currently one-shot only.

They should all implement an interval timing loop to (optionally) poll and report info periodically. And they should all default to the same behavior, which would mean either changing dishStatusInflux.py to be one-shot by default or changing the others to poll every 30 seconds by default. Probably the former.

The loop in dishStatusInflux.py got a little messy, but that's mostly due to keeping the grpc channel open (as well as the InfluxDB client). That's not strictly necessary.
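
The basic shape of such a loop might be (a sketch; the option name mirrors the existing -t behavior and is otherwise illustrative):

import time

def run_loop(opts, loop_body):
    while True:
        rc = loop_body(opts)
        if opts.loop_interval <= 0:
            # one-shot mode, equivalent to -t 0
            return rc
        time.sleep(opts.loop_interval)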

Did Starlink disable the gRPC/web interface?

Thanks again for this exporter! It was amazing, but a few months ago it stopped working for me. My LAN is a 10.x network, with a static route to 192.168.100.1 in the router, out to the Gen 2 Dishy via the Ethernet adapter. Again, it was working fine for months.

Now, neither the gRPC nor the legacy metrics web server seems to be available, and Starlink support has no answers. Has anyone else seen gRPC go from working to vanished?

TypedDicts have type Any in IDE

Example: (variable) ObstructionDict: Type[ObstructionDict] | Any
This means status_data returns Any (except for AlertDict, which appears correct).

I'm playing around locally now; I converted StatusDict to a class definition like so:

class StatusDict(TypedDict):
    id: str
    hardware_version: str
    software_version: str
    state: str
    uptime: int
    snr: Optional[float]
    seconds_to_first_nonempty_slot: float
    pop_ping_drop_rate: float
    downlink_throughput_bps: float
    uplink_throughput_bps: float
    pop_ping_latency_ms: float
    alerts: int
    fraction_obstructed: float
    currently_obstructed: bool
    seconds_obstructed: Optional[float]
    obstruction_duration: Optional[float]
    obstruction_interval: Optional[float]
    direction_azimuth: float
    direction_elevation: float
    is_snr_above_noise_floor: bool

This fixes the Any issue, but now I don't get type hints for variables in the class. I suspect this is caused by how TypedDict itself is defined.

Multiple issues running tools on new(er) dish

Running a new(er) dish/router, and ran into some issues -

  1. It seems the new port is 192.168.1.1:9000 for the RPC calls
  2. I wasn't able to use any of the "spacex.api" files until I touched an __init__.py
  3. My system (Older Pi running stretch) couldn't find influxdb and the whole requirements.txt install failed
  4. Traceback (most recent call last):
       File "dump_dish_status.py", line 25, in <module>
         print("Connected" if response.dish_get_status.state ==
       AttributeError: 'DishGetStatusResponse' object has no attribute 'state'
  5. To be able to run some things, I needed to create spacex/api/device/dish_config.proto
  6. python3 dish_obstruction_map.py -e 192.168.1.1:9000 obstruction.png
    Traceback (most recent call last):
    File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 1242, in obstruction_map
    map_data = get_obstruction_map(context)
    File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 1222, in get_obstruction_map
    return call_with_channel(grpc_call, context=context)
    File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 427, in call_with_channel
    return function(channel, *args, **kwargs)
    File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 1219, in grpc_call
    timeout=REQUEST_TIMEOUT)
    File "/home/pi/.local/lib/python3.5/site-packages/grpc/_channel.py", line 946, in call
    return _end_unary_response_blocking(state, call, False, None)
    File "/home/pi/.local/lib/python3.5/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
    grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.UNIMPLEMENTED
    details = "Unimplemented: *device.Request_DishGetObstructionMap"
    debug_error_string = "{"created":"@1653509590.780266717","description":"Error received from peer ipv4:192.168.1.1:9000","file":"src/core/lib/surface/call.cc","file_line":1070,"grpc_message":"Unimplemented: *device.Request_DishGetObstructionMap","grpc_status":12}"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "dish_obstruction_map.py", line 186, in <module>
    main()
  File "dish_obstruction_map.py", line 172, in main
    rc = loop_body(opts, context)
  File "dish_obstruction_map.py", line 28, in loop_body
    snr_data = starlink_grpc.obstruction_map(context)
  File "/home/pi/starlink-grpc-tools/starlink_grpc.py", line 1244, in obstruction_map
    raise GrpcError(e)
starlink_grpc.GrpcError: Unimplemented: *device.Request_DishGetObstructionMap

but in spacex/api/device/dish_pb2.py I see -
_DISHGETOBSTRUCTIONMAPREQUEST = _descriptor.Descriptor(
name='DishGetObstructionMapRequest',
full_name='SpaceX.API.Device.DishGetObstructionMapRequest',

and

_DISHGETOBSTRUCTIONMAPRESPONSE = _descriptor.Descriptor(
name='DishGetObstructionMapResponse',
full_name='SpaceX.API.Device.DishGetObstructionMapResponse',

But I'm not sure why.

Anything I can do for testing/help, lemme know!

Tnx, Tuc

install problems with python-3.10.x

On a Fedora 35 x86_64 system in a freshly created virtualenv, with Python 3.10.5, the spacex package is never installed by default, and even after manually installing it, other packages are missing/broken:

(sl) [netllama@hal starlink-grpc-tools]$ pip install --upgrade -r requirements.txt
Collecting grpcio>=1.12.0
  Downloading grpcio-1.47.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.5 MB)
     |████████████████████████████████| 4.5 MB 1.8 MB/s 
Collecting grpcio-tools>=1.20.0
  Downloading grpcio_tools-1.47.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.4 MB)
     |████████████████████████████████| 2.4 MB 2.0 MB/s 
Collecting protobuf>=3.6.0
  Downloading protobuf-4.21.5-cp37-abi3-manylinux2014_x86_64.whl (408 kB)
     |████████████████████████████████| 408 kB 1.9 MB/s 
Collecting yagrc>=1.1.1
  Downloading yagrc-1.1.1-py3-none-any.whl (15 kB)
Collecting paho-mqtt>=1.5.1
  Downloading paho-mqtt-1.6.1.tar.gz (99 kB)
     |████████████████████████████████| 99 kB 1.6 MB/s 
Collecting influxdb>=5.3.1
  Downloading influxdb-5.3.1-py2.py3-none-any.whl (77 kB)
     |████████████████████████████████| 77 kB 2.0 MB/s 
Collecting influxdb_client>=1.23.0
  Downloading influxdb_client-1.31.0-py3-none-any.whl (705 kB)
     |████████████████████████████████| 705 kB 1.6 MB/s 
Collecting pypng>=0.0.20
  Downloading pypng-0.20220715.0-py3-none-any.whl (58 kB)
     |████████████████████████████████| 58 kB 1.8 MB/s 
Collecting six>=1.5.2
  Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting protobuf>=3.6.0
  Downloading protobuf-3.20.1-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)
     |████████████████████████████████| 1.1 MB 1.4 MB/s 
Requirement already satisfied: setuptools in /home/netllama/stuff/sl/sl/lib/python3.10/site-packages (from grpcio-tools>=1.20.0->-r requirements.txt (line 2)) (57.4.0)
Collecting grpcio-reflection>=1.7.3
  Downloading grpcio_reflection-1.47.0-py3-none-any.whl (16 kB)
Collecting requests>=2.17.0
  Downloading requests-2.28.1-py3-none-any.whl (62 kB)
     |████████████████████████████████| 62 kB 555 kB/s 
Collecting msgpack
  Downloading msgpack-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (316 kB)
     |████████████████████████████████| 316 kB 689 kB/s 
Collecting pytz
  Downloading pytz-2022.2.1-py2.py3-none-any.whl (500 kB)
     |████████████████████████████████| 500 kB 890 kB/s 
Collecting python-dateutil>=2.6.0
  Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
     |████████████████████████████████| 247 kB 368 kB/s 
Collecting certifi>=14.05.14
  Downloading certifi-2022.6.15-py3-none-any.whl (160 kB)
     |████████████████████████████████| 160 kB 341 kB/s 
Collecting rx>=3.0.1
  Downloading Rx-3.2.0-py3-none-any.whl (199 kB)
     |████████████████████████████████| 199 kB 379 kB/s 
Collecting urllib3>=1.26.0
  Downloading urllib3-1.26.11-py2.py3-none-any.whl (139 kB)
     |████████████████████████████████| 139 kB 236 kB/s 
Collecting charset-normalizer<3,>=2
  Downloading charset_normalizer-2.1.0-py3-none-any.whl (39 kB)
Collecting idna<4,>=2.5
  Downloading idna-3.3-py3-none-any.whl (61 kB)
     |████████████████████████████████| 61 kB 265 kB/s 
Using legacy 'setup.py install' for paho-mqtt, since package 'wheel' is not installed.
Installing collected packages: six, urllib3, protobuf, idna, grpcio, charset-normalizer, certifi, rx, requests, pytz, python-dateutil, msgpack, grpcio-reflection, yagrc, pypng, paho-mqtt, influxdb-client, influxdb, grpcio-tools
    Running setup.py install for paho-mqtt ... done
Successfully installed certifi-2022.6.15 charset-normalizer-2.1.0 grpcio-1.47.0 grpcio-reflection-1.47.0 grpcio-tools-1.47.0 idna-3.3 influxdb-5.3.1 influxdb-client-1.31.0 msgpack-1.0.4 paho-mqtt-1.6.1 protobuf-3.20.1 pypng-0.20220715.0 python-dateutil-2.8.2 pytz-2022.2.1 requests-2.28.1 rx-3.2.0 six-1.16.0 urllib3-1.26.11 yagrc-1.1.1
WARNING: You are using pip version 21.2.3; however, version 22.2.2 is available.
You should consider upgrading via the '/home/netllama/stuff/sl/sl/bin/python3 -m pip install --upgrade pip' command.
(sl) [netllama@hal starlink-grpc-tools]$ python3 dump_dish_status.py
Traceback (most recent call last):
  File "/home/netllama/stuff/starlink-grpc-tools/dump_dish_status.py", line 6, in <module>
    from spacex.api.device import device_pb2
ModuleNotFoundError: No module named 'spacex'
(sl) [netllama@hal starlink-grpc-tools]$ pip install spacex
Collecting spacex
  Downloading spacex-0.0.2.tar.gz (5.4 kB)
Collecting requests==2.18.4
  Downloading requests-2.18.4-py2.py3-none-any.whl (88 kB)
     |████████████████████████████████| 88 kB 812 kB/s 
Collecting chardet<3.1.0,>=3.0.2
  Using cached chardet-3.0.4-py2.py3-none-any.whl (133 kB)
Collecting urllib3<1.23,>=1.21.1
  Downloading urllib3-1.22-py2.py3-none-any.whl (132 kB)
     |████████████████████████████████| 132 kB 1.0 MB/s 
Collecting idna<2.7,>=2.5
  Downloading idna-2.6-py2.py3-none-any.whl (56 kB)
     |████████████████████████████████| 56 kB 1.2 MB/s 
Requirement already satisfied: certifi>=2017.4.17 in /home/netllama/stuff/sl/sl/lib/python3.10/site-packages (from requests==2.18.4->spacex) (2022.6.15)
Using legacy 'setup.py install' for spacex, since package 'wheel' is not installed.
Installing collected packages: urllib3, idna, chardet, requests, spacex
  Attempting uninstall: urllib3
    Found existing installation: urllib3 1.26.11
    Uninstalling urllib3-1.26.11:
      Successfully uninstalled urllib3-1.26.11
  Attempting uninstall: idna
    Found existing installation: idna 3.3
    Uninstalling idna-3.3:
      Successfully uninstalled idna-3.3
  Attempting uninstall: requests
    Found existing installation: requests 2.28.1
    Uninstalling requests-2.28.1:
      Successfully uninstalled requests-2.28.1
    Running setup.py install for spacex ... done
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
influxdb-client 1.31.0 requires urllib3>=1.26.0, but you have urllib3 1.22 which is incompatible.
Successfully installed chardet-3.0.4 idna-2.6 requests-2.18.4 spacex-0.0.2 urllib3-1.22
WARNING: You are using pip version 21.2.3; however, version 22.2.2 is available.
You should consider upgrading via the '/home/netllama/stuff/sl/sl/bin/python3 -m pip install --upgrade pip' command.
(sl) [netllama@hal starlink-grpc-tools]$ python3 dump_dish_status.py
Traceback (most recent call last):
  File "/home/netllama/stuff/starlink-grpc-tools/dump_dish_status.py", line 6, in <module>
    from spacex.api.device import device_pb2
  File "/home/netllama/stuff/sl/sl/lib64/python3.10/site-packages/spacex/__init__.py", line 13, in <module>
    from .launchpad import *
  File "/home/netllama/stuff/sl/sl/lib64/python3.10/site-packages/spacex/launchpad.py", line 1, in <module>
    from .http import request
  File "/home/netllama/stuff/sl/sl/lib64/python3.10/site-packages/spacex/http.py", line 1, in <module>
    import requests
  File "/home/netllama/stuff/sl/sl/lib64/python3.10/site-packages/requests/__init__.py", line 43, in <module>
    import urllib3
  File "/home/netllama/stuff/sl/sl/lib64/python3.10/site-packages/urllib3/__init__.py", line 8, in <module>
    from .connectionpool import (
  File "/home/netllama/stuff/sl/sl/lib64/python3.10/site-packages/urllib3/connectionpool.py", line 29, in <module>
    from .connection import (
  File "/home/netllama/stuff/sl/sl/lib64/python3.10/site-packages/urllib3/connection.py", line 39, in <module>
    from .util.ssl_ import (
  File "/home/netllama/stuff/sl/sl/lib64/python3.10/site-packages/urllib3/util/__init__.py", line 3, in <module>
    from .connection import is_connection_dropped
  File "/home/netllama/stuff/sl/sl/lib64/python3.10/site-packages/urllib3/util/connection.py", line 3, in <module>
    from .wait import wait_for_read
  File "/home/netllama/stuff/sl/sl/lib64/python3.10/site-packages/urllib3/util/wait.py", line 1, in <module>
    from .selectors import (
  File "/home/netllama/stuff/sl/sl/lib64/python3.10/site-packages/urllib3/util/selectors.py", line 14, in <module>
    from collections import namedtuple, Mapping
ImportError: cannot import name 'Mapping' from 'collections' (/usr/lib64/python3.10/collections/__init__.py)
(sl) [netllama@hal starlink-grpc-tools]$ which python
~/stuff/sl/sl/bin/python
(sl) [netllama@hal starlink-grpc-tools]$ python -V
Python 3.10.5

seconds_obstructed confusion

Hi, I'm pushing some data from my starlink into influxdb using your tools. I'm a grateful data hoarder.

I'm a bit confused by seconds_obstructed, though. The docs suggest it's the number of seconds of downtime in the last 12 hours, but the numbers don't match for me.

I'm currently getting the following:

> select last(seconds_obstructed) from "spacex.starlink.user_terminal.status"
name: spacex.starlink.user_terminal.status
time                last
----                ----
1614366741000000000 1933

However, the web UI suggests I have much less downtime:

[Screenshot: web UI obstruction/downtime display, 2021-02-26]

The mobile app says the same, but 12 hours instead of 24 hours.

It's not clear what's incorrect, but I'd like a happier dashboard. :)

Add more history stats

Right now, the stats computed from the history data are all about packet loss, because that's mostly what I'm interested in tracking myself.

However, users over on the Starlink subreddit seem really interested in latency stats over time, and there has been a question recently about tracking upload/download usage. Both of these things are reported by the status info scripts, but only as instantaneous numbers. This data is also present in the history data, and it should be easy enough to add computation of more robust stats from that.
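
For example, a more robust latency statistic could be computed from the per-sample history arrays along these lines (a sketch; it assumes the latency samples align index-for-index with pop_ping_drop_rate):

def mean_pop_ping_latency(latencies, drop_rates):
    # Ignore samples that were fully dropped, since their latency
    # measurement is meaningless.
    valid = [lat for lat, drop in zip(latencies, drop_rates) if drop < 1.0]
    return sum(valid) / len(valid) if valid else None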

Consecutive runs of dishHistoryInflux.py generate an error

First run executes and populates as follows:

> SELECT * FROM "spacex.starlink.user_terminal.ping_stats"
name: spacex.starlink.user_terminal.ping_stats
time                count_full_obstructed_ping_drop count_full_ping_drop count_full_unscheduled_ping_drop count_obstructed count_unscheduled id                           samples total_obstructed_ping_drop total_ping_drop   total_unscheduled_ping_drop
----                ------------------------------- -------------------- -------------------------------- ---------------- ----------------- --                           ------- -------------------------- ---------------   ---------------------------
1610416033180088000 0                               0                    0                                0                0                 ut01000000-00000000-000xxxxx 3600    0                          5.000181436538696 0

Consecutive runs error with:

SpaceX.API.Device.Device is a service:
service Device {
  rpc Handle ( .SpaceX.API.Device.Request ) returns ( .SpaceX.API.Device.Response );
  rpc Stream ( stream .SpaceX.API.Device.ToDevice ) returns ( stream .SpaceX.API.Device.FromDevice );
}
Failed writing to InfluxDB database: 400: {"error":"partial write: field type conflict: input field \"count_full_obstructed_ping_drop\" on measurement \"spacex.starlink.user_terminal.ping_stats\" is type float, already exists as type integer dropped=1"}
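
The conflict comes from InfluxDB fixing a field's type on first write. One generic remedy (a sketch, not necessarily how this project resolved it) is to coerce numeric fields to float before writing:

def coerce_numeric_fields(fields):
    # Write ints as floats so a later float value can't conflict with a
    # field that was first created as integer (bools are left alone).
    return {
        name: float(value) if isinstance(value, int) and not isinstance(value, bool) else value
        for name, value in fields.items()
    }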

FR: Dockerize

Make this into a Docker container. I may be able to take this on and submit a PR.

SNR may have broken with a recent update

Just got 3155b85a-29b0-4de0-8c5e-c23c321bf245.uterm.release and I've not looked into it much, but at least my SNR graph looks broken while the rest of the stuff seems to be working.

[Screenshot: SNR graph, 2021-10-17]

Hangs after starting influx script

Environment:
Server OS CentOS 8 - 4.18.0-305.19.1.el8_4.x86_64
Python venv version 3.6.8

command: python3 dish_grpc_influx.py -v status alert_detail -t 10 -U <username> -P <password>

Results:
Running the command as a service in this case, but it does the same when not running as a service.

Nov 02 22:18:17 BIL-STARLINK systemd[1]: Started Starlink Influx Script.

Hangs at this point and never runs.

dump_dish_status.py fails

It tries to import the module spacex.api.device, which is not installed by the requirements.txt file.

Need to wrap Environment variables within /etc/systemd/system/starlink-influx2.service in quotes

Summary: When either the InfluxDB TOKEN or the ORG contains equal (=) signs, the script does NOT run properly.

For example, my throw-away token: atIde3fpEpwT2BersHMq7iMBNOJDOfYfGXS8QDX4Lzf4c2hbTy1HwAaZSPAUWOAQH4tPT4BCIvsowoOv7DlHeA==

would break the current version of the system file due to the two equal signs at the end

Fix: Add quotes to the Environment line to encapsulate the values.

Running the systemd command to start logging starlink data fails with:

Nov 10 12:30:58 hostname python3[323035]: usage: dish_grpc_influx2.py [-g TARGET] [-h] [-N] [-t LOOP_INTERVAL] [-v] [-a]
Nov 10 12:30:58 hostname python3[323035]: [-o N] [-s SAMPLES] [-j] [-u URL] [-T TOKEN]
Nov 10 12:30:58 hostname python3[323035]: [-B BUCKET] [-O ORG] [-k] [-C FILENAME] [-I]
Nov 10 12:30:58 hostname python3[323035]: mode [mode ...]
Nov 10 12:30:58 hostname python3[323035]: dish_grpc_influx2.py: error: SSL options only apply to HTTPS URLs
Nov 10 12:30:58 hostname systemd[1]: starlink-influx2.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 10 12:30:58 hostname systemd[1]: starlink-influx2.service: Failed with result 'exit-code'.

My system file is:

[Unit]
Description=Starlink GRPC to InfluxDB 2.x exporter
After=network.target
[Service]
Type=simple
WorkingDirectory=/opt/starlink-grpc-tools/
Environment=INFLUXDB_URL=127.0.0.1:8086 INFLUXDB_TOKEN=atIde3fpEpwT2BersHMq7iMBNOJDOfYfGXS8QDX4Lzf4c2hbTy1HwAaZSPAUWOAQH4tPT4BCIvsowoOv7DlHeA== INFLUXDB_Bucket=starlink INFLUXDB_ORG=1234b000c1234a12 INFLUXDB_SSL=false
ExecStart=/opt/starlink-grpc-tools/venv/bin/python3 /opt/starlink-grpc-tools/dish_grpc_influx2.py -t 10 status alert_detail
[Install]
WantedBy=multi-user.target

When I modified the Environment statement to quote the values:

Environment="(same as above)"

The script runs fine.

Feature Request: Dish orientation control

Feature Request:
If you can accomplish the Stow functionality, perhaps you can also allow for manual orientation control.

Problem It Solves:
In the mountainous backwoods of the inland Northwest, there are peculiar locations with very limited LOS. Trees rise over 100ft and rising elevations limit placement within range of the required power source. Range can be increased and power can be extended to a point but there are still sometimes limited placement options.

I'm speculating, but the orientation of Dishy appears to be a function of pointing the dish up and making an initial connection (after BOOTING->SEARCHING). Everybody has LOS directly up, right? Because there is incomplete coverage of Starlink currently and this kind of location has a suboptimal FOV, we have LOS but there's no satellite there so getting that connection can take hours or days even with the fancy antenna.

Only after making the initial connection does it apparently obtain the computed optimal angle for coverage and reveal the wedgeFractionObstructedList. Iteratively trying to find an optimal placement location for Dishy is complicated by this initial connection delay in the more challenging environment. It appears that after either a GPS change or missing several predicted satellites, it returns to vertical and tries to get a new orientation recommendation (which is often very similar to the last one).

With manual orientation control, we can externally compute the optimal Dishy orientation (or hazard a guess based on past success) and apply it, or even schedule changes to it at different times of the day as needed, completely overriding the Starlink recommendations if we so choose. The end user community may come up with some better algorithms for odd use cases, like the backwoods holler or the canyons on mars.

It also enables some fun synchronized Dishy dance routines that will make Elon smile.

Add support for pulling dish_get_obstruction_map data

It looks like SpaceX added a 2-dimensional map of obstruction data (actually, it looks like it's SNR per direction) to the dish firmware at some point, as the mobile Starlink app has just added support for displaying it.

I'm not sure how useful it would be to collect this data over time, given that the dish presumably creates this map from data it has collected over time already, and the app does a decent job of visualizing it. However, it might be of some interest to poll it somewhat infrequently, say once a day or so, and see how it changes over time.

Adding this would probably require a better approach to recording array data in the database backends, though. The existing array storage is a bit too simplistic for this level of data. Also, not all the data backends would be appropriate for this. It may be better off as a completely separate script, maybe one that outputs an image instead of the raw data.
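
The "outputs an image" idea could be as simple as mapping the per-direction values to grayscale (a sketch using pypng, which is already in requirements.txt; it assumes values in [0, 1] with negative meaning unknown):

import png

def write_map_png(rows, filename):
    # Unknown cells (negative values) render as black; valid values scale
    # to 8-bit grayscale.
    pixels = [[0 if v < 0 else int(v * 255) for v in row] for row in rows]
    png.from_array(pixels, "L").save(filename)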

performance issue

I'm running the following (from the docker image):

dish_grpc_influx.py -t 10 --all-samples -v status obstruction_detail ping_drop ping_run_length ping_latency ping_loaded_latency usage alert_detail

This is using 100% CPU constantly. That seems excessive to convert starlink data to influxdb. Are there any known performance issues here?

Error in "Running with SystemD" instructions

In the following line, I believe that there is an 's' missing from the command:
sudo mkdir starlink-grpc-tool

Should it not read as follows instead?
sudo mkdir starlink-grpc-tools

Next:

sudo cp systemd/starlink-influx2.service /etc/systemd/starlink-influx2.service
This did not work in Ubuntu 20.04. I had to copy the service file to /etc/systemd/system - perhaps the correct instruction is:

sudo cp systemd/starlink-influx2.service /etc/systemd/system/starlink-influx2.service

and then enable and start the scripts

Remote API

Fantastic project!

Question for the expert, we have a fleet of terminals in the field and would like to ship / retrieve terminal data to a central location. Ideally something like an AWS server pulling / polling data from remote terminals.

The first (and possibly insurmountable) challenge is that Starlink doesn't offer a public IP / non-NAT option. This may be a complete deal breaker...

Do you know of any mechanism to retrieve terminal data via the "land side"?

One thought is a small/simple wifi-based something that pairs with every terminal and simply pipes this data back towards us. Adding more hardware to every install is a pain, but it would be a guaranteed option/solution.

Doesn't like 3.10.3 or 3.10.4 but is fine with 3.8.13

I get:
File "C:\Users\craso\Documents\GitHub\starlink-grpc-tools\dish_grpc_text.py", line 21, in
import dish_common
File "C:\Users\craso\Documents\GitHub\starlink-grpc-tools\dish_common.py", line 22, in
import starlink_grpc
File "C:\Users\craso\Documents\GitHub\starlink-grpc-tools\starlink_grpc.py", line 351, in
from spacex.api.device import device_pb2
ModuleNotFoundError: No module named 'spacex'

This is with Python 3.10.x; running py -3.8 works, though.

Dish firmware version change removed last_24h_obstructed_s field?

Since yesterday looks like I've been getting this error:
% python ./dish_grpc_text.py status
Traceback (most recent call last):
  File "/Users/leadzero/workspace/starlink-grpc-tools/./dish_grpc_text.py", line 221, in <module>
    main()
  File "/Users/leadzero/workspace/starlink-grpc-tools/./dish_grpc_text.py", line 207, in main
    rc = loop_body(opts, gstate)
  File "/Users/leadzero/workspace/starlink-grpc-tools/./dish_grpc_text.py", line 174, in loop_body
    rc = dish_common.get_data(opts,
  File "/Users/leadzero/workspace/starlink-grpc-tools/dish_common.py", line 196, in get_data
    rc = get_status_data(opts, gstate, add_item, add_sequence)
  File "/Users/leadzero/workspace/starlink-grpc-tools/dish_common.py", line 231, in get_status_data
    groups = starlink_grpc.status_data(context=gstate.context)
  File "/Users/leadzero/workspace/starlink-grpc-tools/starlink_grpc.py", line 641, in status_data
    "seconds_obstructed": status.obstruction_stats.last_24h_obstructed_s,
AttributeError: last_24h_obstructed_s

Commenting out line 641 in /Users/leadzero/workspace/starlink-grpc-tools/starlink_grpc.py seems to solve it. Wondering if maybe a version update yesterday caused it, since it looks like my dish rebooted yesterday too.

Status works but write to influx returns "No host specified"

python3 dish_grpc_text.py -s 2 status works and returns current data correctly, but dish_grpc_influx2 returns an error.

I don't see any option to specify a host, and the URL works just fine.

python3 dish_grpc_influx2.py -v -u "localhost:8086" -T "" -B StarLink -O BVRanch -s 2 status
Data points queued: 1
Data points queued: 1
Data points written: 1
ERROR: The batch item wasn't processed successfully because: No host specified.
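
A likely cause (a guess based on standard URL parsing, not confirmed against the influxdb-client source): without a scheme, "localhost:8086" parses as scheme "localhost" with no host at all. Prefixing http:// may fix it:

from urllib.parse import urlparse

print(urlparse("localhost:8086").hostname)         # None - no host found
print(urlparse("http://localhost:8086").hostname)  # localhost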

Make work on arm64 (aka Raspberry Pi)

Here's what I get when trying to run the Docker container on my RPi4b 8GB:

$ sudo docker run --name='starlink-grpc-tools' ghcr.io/sparky8512/starlink-grpc-tools dish_grpc_text.py -v status alert_detail
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
standard_init_linux.go:219: exec user process caused: exec format error

Any chance of making an arm64 version of the container available?

-o option should aggregate beyond size of history data buffer

A recent dish firmware change appears to have reduced the amount of history data returned via gRPC call to 15 minutes, which means data must be polled more frequently than that in order to avoid losing data. This is not a big deal for bulk history data collection, as that should probably be polling more frequently than that, anyway, to minimize the amount of data lost when the dish reboots. However, for the history-based statistics, doing so would also reduce the time period over which statistics are computed, which may be undesirable.

I added the -o option a while ago in order to facilitate computation of history statistics over a longer time interval than the data is polled. This was done to minimize data loss on dish reboot, but I was hoping it would apply to this history buffer size reduction, too. Unfortunately, the current implementation of -o does not keep data past the end of the history data buffer, nor does it even detect that it is losing data. Adding support for that would make it more complicated (which is probably why I didn't do it that way to begin with), but I personally want to keep my stats gathering at hour-long intervals, so I may take a stab at doing this, anyway.
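
Detecting the loss should at least be straightforward, since the history data carries a running sample counter (the "current counter" visible in logs elsewhere on this tracker). A sketch:

def samples_lost(prev_counter, new_counter, buffer_len):
    # Any samples beyond what the ring buffer holds between polls are
    # gone for good.
    missed = new_counter - prev_counter
    return max(0, missed - buffer_len)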

FR: Send to InfluxDB / MQTT

Send the data to an InfluxDB instance for later consumption by another tool (Grafana?).
MQTT for action by IoT systems (Home Assistant / OpenHAB?).

Non-verbose mode is too non-verbose

While testing recent changes against failure cases, I realized some of the errors are not being printed unless the verbose option is specified. Some of the later scripts will print some errors, but that just made for inconsistency.

I originally coded the CSV output script that way because I didn't want error messages in the output file, but errors should be going to stderr instead of stdout anyway, probably by using the logging module.

While I'm at it, the error return from the functions in starlink_{grpc,json} should probably just raise an exception and have the calling function report the error instead of printing/logging directly and returning None.
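
In a calling script, the exception-based shape might look like this (a sketch; GrpcError already exists in starlink_grpc, per tracebacks elsewhere on this tracker):

import logging
import sys

import starlink_grpc

logging.basicConfig(stream=sys.stderr, level=logging.INFO)

try:
    groups = starlink_grpc.status_data()
except starlink_grpc.GrpcError as e:
    # Errors go to stderr via logging, keeping stdout clean for CSV output.
    logging.error("Failed getting status data: %s", e)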

buffering when stdout is not a terminal

python3 dish_grpc_text.py status -t 1|cat

I would expect this to display one line per second, but it seems to buffer a lot of lines, and then display them all.

My use case is parsing the status stream with a script to get obstruction information into my desktop's panel.
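
The likely cause (a guess, not specific to this project): Python block-buffers stdout when it is not a terminal, so lines accumulate until the buffer fills. Generic workarounds:

import sys

# 1. Force unbuffered output from outside the script:
#    python3 -u dish_grpc_text.py status -t 1 | cat
# 2. Or reconfigure stdout for line buffering (Python 3.7+):
sys.stdout.reconfigure(line_buffering=True)
# 3. Or flush explicitly on each line written:
print("status line", flush=True)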

Type hints for `status_data` returns

So off the top of my head, there are two approaches to this we can take.

First (and my preferred) is to create a dataclass for attributes from status_data, so we would return a Tuple[StatusData, ObstructionDetails, Alerts]. This would be a breaking change, as references such as groups[0]["id"] become groups[0].id. I prefer this one because it's slightly less work to reference variables 😆

Second is to create a class extending TypedDict. Referencing variables will look the same as it is now, but groups[0]["id"] would return str rather than Any.

Both of these require creating new classes, and both mean we can deprecate both status_field_names and status_field_types, since this information is present in the return type.
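
A minimal sketch of the first approach (field list abridged, names illustrative):

from dataclasses import dataclass
from typing import Optional

@dataclass
class StatusData:
    id: str
    hardware_version: str
    uptime: int
    pop_ping_latency_ms: float
    snr: Optional[float]

# groups[0]["id"] would then become groups[0].id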

Support for fast_switching and fast_switching_enabled? (dish_emc)

Apparently back in April with firmware revision feb4dfde a new method was enabled called "fast_switching". I took a look and I don't think starlink-grpc-tools supports it. Should it be added? You should be able to see details on it in the .protoset I uploaded for another new method: https://github.com/sparky8512/starlink-grpc-tools/files/6640354/dish.zip

The question is what this API call does. I'm thinking / hoping that this is related to a feature they announced in email in April where Dishy will switch to a new satellite not just when the old one is going out of the circular field of view, but also when the satellite is about to be obscured by a site-specific obstruction. That's a major change that should greatly improve reliability for folks like me who have just a couple of trees in view. AFAICT this feature has never actually been turned on. At least I've never noticed a performance improvement (and thank you to your software for letting me monitor that.) Some folks on Reddit think it does work for them but it's hard to tell how reliable that report is.

Anyway, I'm wondering what the method does. From the phrasing I suspect it's a status detection only but maybe it's also a way to toggle the feature off and on?

dish_unstow command

Thank you for this repo.

Using grpcurl, it is easy to invoke the dish_stow command but I could not find the corresponding command to unstow it. Tried the dish_stow (toggling), dish_unstow, unstow to no avail. Any idea what it is?

Consider removing grpcurl / protoc commands from docker run time

I mentioned this on issue #22, but then got busy with other things. While it's not directly related to that issue, I think whatever is decided on this topic may wind up resolving the performance issue reported in that one.

A few weeks ago, I implemented reflection support for the grpc scripts, so that the scripts would work without having to generate the pb2 protocol modules in advance. I did it as a separate Python package, though, and if that's not installed (via pip), the scripts will still attempt to load the generated pb2 modules out of the import path.

Right now, the new Python package (yagrc) is not being installed by the setup in Dockerfile, but that's OK because entrypoint.sh is still generating the pb2 modules via grpcurl and protoc. I think it would be better to switch to yagrc, though, as this would avoid a recurrence of what happened in issue #18.

I was testing some changes to Dockerfile and entrypoint.sh to do this by changing the pip install command to install everything from requirements.txt instead of listing the packages individually. However, after looking into what was going on in issue #22, I'm not so sure that's a good idea. The way Docker caches intermediate steps of the image generation makes it so that the image build can get "stuck" at whatever specific package versions were latest at the time that particular build step got cached. If that's going to happen anyway, then we should probably pin the packages being installed via pip in Dockerfile to specific known good package versions.

So anyway, I propose we drop the grpcurl and protoc commands from entrypoint.sh, add yagrc to the pip install command, and pin at least some of the packages to specific version numbers. grpcurl and grpcio-tools would no longer be required, so those could then be removed from the setup. Also, the package that seems to be causing at least part of issue #22, protobuf, is not even being installed directly, so to pin that to a known good version, that would also need to be added explicitly to the pip install command.

obstruction_interval=nan causes influx ingest to fail

Using dish_grpc_influx.py, noticed that my statistics stopped being ingested. Checked logs and found many gigabytes of the following:

unable to parse 'spacex.starlink.user_terminal.status,id=ut000#####-########-######## alert_mast_not_near_vertical=False,alert_motors_stuck=False,alert_roaming=False,alert_slow_ethernet_speeds=False,alert_thermal_shutdown=False,alert_thermal_throttle=False,alert_unexpected_location=False,alerts=0i,currently_obstructed=False,direction_azimuth=-1.9601364135742188,direction_elevation=63.73470687866211,downlink_throughput_bps=7314.8662109375,fraction_obstructed=0.00020500205573625863,hardware_version=\"rev1_pre_production\",obstruction_duration=0.0,obstruction_interval=nan,pop_ping_drop_rate=0.0,pop_ping_latency_ms=31.85714340209961,raw_wedges_fraction_obstructed_0=0.0,raw_wedges_fraction_obstructed_1=0.0,raw_wedges_fraction_obstructed_10=0.0,raw_wedges_fraction_obstructed_11=0.0,raw_wedges_fraction_obstructed_2=0.0,raw_wedges_fraction_obstructed_3=0.0,raw_wedges_fraction_obstructed_4=0.0,raw_wedges_fraction_obstructed_5=0.0,raw_wedges_fraction_obstructed_6=0.0,raw_wedges_fraction_obstructed_7=0.0,raw_wedges_fraction_obstructed_8=0.0,raw_wedges_fraction_obstructed_9=0.0,seconds_to_first_nonempty_slot=0.0,software_version=\"ffbba606-958e-40c1-9668-b8f1cbf13081.uterm.release\",state=\"CONNECTED\",uplink_throughput_bps=10368.3193359375,uptime=270355i,valid_s=269986.0,wedges_fraction_obstructed_0=0.0,wedges_fraction_obstructed_1=0.0,wedges_fraction_obstructed_10=0.0,wedges_fraction_obstructed_11=0.0,wedges_fraction_obstructed_2=0.0,wedges_fraction_obstructed_3=0.0,wedges_fraction_obstructed_4=0.0,wedges_fraction_obstructed_5=0.0,wedges_fraction_obstructed_6=0.0,wedges_fraction_obstructed_7=0.0,wedges_fraction_obstructed_8=0.0,wedges_fraction_obstructed_9=0.0 1648095497': invalid number dropped=0"

I tried modifying starlink_grpc.py line 622 as follows:

    obstruction_interval = float(status.obstruction_stats.avg_prolonged_obstruction_interval_s or 0)

Alas, this didn't seem to help.
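
One detail that may explain why the edit above had no effect: NaN is truthy in Python, so "value or 0" returns NaN unchanged. An explicit isnan check on that line might work instead (a sketch):

import math

value = status.obstruction_stats.avg_prolonged_obstruction_interval_s
obstruction_interval = 0.0 if math.isnan(value) else float(value)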

InfluxDB optional parameter to transform true->1 and false->0 ?

Hello @sparky8512,

I was curious, since InfluxDB/Grafana is more friendly to null/numeric values when it comes to graphing and value mappings in Grafana. Would it be possible to add an optional parameter to convert true to 1 and false to 0 on load into InfluxDB?

See this for an example of the issue: https://stackoverflow.com/questions/60669691/boolean-to-text-mapping-in-grafana I can create value mappings in Grafana based on int values but not string values.
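
Such a transform could be as simple as this (a sketch; no such option exists yet):

def bools_to_ints(fields):
    # Integer fields can be value-mapped in Grafana, unlike strings/bools.
    return {k: int(v) if isinstance(v, bool) else v for k, v in fields.items()}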

Thanks!

Docker image missing generated gRPC protocol modules

As per the subject:

docker run --name='starlink-grpc-tools' ghcr.io/sparky8512/starlink-grpc-tools dump_dish_status.py

results in:

    This script requires the generated gRPC protocol modules. See README file for details.

Shouldn't the image come with these modules ready?

PS: the above happens exactly the same on both amd64 and aarch64 architectures.

Docker unable to import transceiver_pb2

Attempting to use the provided Docker container and output the data to influxdb. Running with these options:

docker run -d -t --name='starlink-grpc-tools' -e INFLUXDB_HOST=192.168.2.201 -e INFLUXDB_PORT=49153 -e INFLUXDB_DB=starlinkstats neurocis/starlink-grpc-tools dishStatusInflux.py -v

And this is the log output:

Traceback (most recent call last):
  File "/app/dishStatusInflux.py", line 23, in <module>
    import spacex.api.device.device_pb2
  File "/app/spacex/api/device/device_pb2.py", line 18, in <module>
    from spacex.api.device import transceiver_pb2 as spacex_dot_api_dot_device_dot_transceiver__pb2
ImportError: cannot import name 'transceiver_pb2' from 'spacex.api.device' (unknown location)

Help with history

Hi all, I just started writing a nice python script to mimic the Starlink web page. So far I've got everything I want working.

But now I want to look at outage history. So in my script I do this:

h = starlink_grpc.get_history()

And then look at outages:

lastout = h.outages[-1]

And I get this:

cause: NO_DOWNLINK
start_timestamp_ns: 1341872909960043179
duration_ns: 15259975072
did_switch: true

The timestamp when converted is off by 10 years!

n = datetime.datetime.fromtimestamp(1341877680020042191 / 1000000000)

and the value of n is

datetime.datetime(2012, 7, 9, 19, 48, 0, 20042)

It's only off by 10 years....
Am I missing something here?
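
A probable explanation, consistent with the GPS-time note in the history-parsing issue elsewhere on this tracker: the dish timestamps outages in GPS time, whose epoch is 1980-01-06, so interpreting the value as a Unix timestamp lands roughly 10 years in the past. A conversion sketch (the leap-second offset is an assumption and can change):

import datetime

GPS_UNIX_EPOCH_OFFSET_S = 315964800  # 1970-01-01 to 1980-01-06
GPS_UTC_LEAP_SECONDS = 18            # GPS-UTC offset as of 2017

ns = 1341872909960043179
unix_s = ns / 1e9 + GPS_UNIX_EPOCH_OFFSET_S - GPS_UTC_LEAP_SECONDS
print(datetime.datetime.utcfromtimestamp(unix_s))  # lands in 2022, not 2012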

-o option no longer keeps data across dish reboot

It looks like I broke the -o option's ability to retain data across dish reboot, which was its original purpose, when I "improved" it to add the functionality in issue #29.

This is what I get for not writing proper unit tests....

I found at least one defect in the code already, but I'm not sure fixing it will actually resolve this. Am still looking.

Authentication

Some gRPC calls require authentication. Do you know how this works?

Example:

$ grpcurl -plaintext -d {\"dish_get_emc\":{}} 192.168.100.1:9200 SpaceX.API.Device.Device/Handle
ERROR:
  Code: PermissionDenied
  Message:
$

Docker image crashing with no log output

Hi, I'm trying to use the docker image to write data to influxdb but no luck.

I've tried both InfluxDB v1 and v2; on both I get the same error - the container restarts constantly, and nothing is written to the log. Any ideas where to start digging into this? I'm fairly new to Docker, but I might look at doing it myself so I can dump some good ol' fashioned log lines in there.

Note I've got Grafana connected to Influx OK (both v1 and v2), and I know the code is running correctly, as I get error messages when things are wrong (like passwords/hostnames etc.), but when everything seems right, it just restarts.

history data parsing broken due to gRPC service change

A couple days ago, a Reddit user posted that they stopped getting SNR data in the get_status response from the Starlink dish gRPC service in the latest firmware.

I just got that firmware version (3155b85a-29b0-4de0-8c5e-c23c321bf245), and as I expected, SpaceX has removed all of the fields that had been marked as deprecated in the protocol definition, which affected both get_status and get_history responses. The fields are still present in the protocol definitions, but are no longer being populated. This is treated as them being set to their default values, which is usually something like 0 for single fields and empty array for repeated fields.

For get_status this affected the snr and state fields. For get_history this affected the snr, scheduled, and obstructed fields. Because the scripts assume all the history data is the same array size as pop_ping_drop_rate (which is still being populated), anything that uses the history data is currently crashing the scripts with index out of range error.

At a minimum, the usage of those fields needs to be removed, which will result in less useful data, but at least will restore the ability to record the remaining data.

For scheduled and obstructed fields, it may be possible to reconstruct that data from the outage section of the get_history results by correlating that with pop_ping_drop_rate, but that gets a little complicated because it uses GPS time for time stamps and there is no indication of what exact time stamp the pop_ping_drop_rate data represents. GPS time can be converted to UTC time, but that would introduce another potential point of failure due to the need to know how many leap seconds they differ by. Still doable, but messy, and I'm not sure I want to go down that path when there is no guarantee they won't just change the service again.

Docker image doesn't contain dish_grpc_influx2.py

I am moving over to InfluxDB v2 and ran into an issue.

I get this error when using the new docker image located here: ghcr.io/sparky8512/starlink-grpc-tools

/usr/local/bin/python3: can't open file '/app/dish_grpc_influx2.py': [Errno 2] No such file or directory

It looks like the docker image was last updated 2 months ago. Yet the influx v2 code was added after the last docker image update.
