
hubble's Introduction

Hubble Logo

Network, Service & Security Observability for Kubernetes

What is Hubble?

Hubble is a fully distributed networking and security observability platform for cloud native workloads. It is built on top of Cilium and eBPF to enable deep visibility into the communication and behavior of services as well as the networking infrastructure in a completely transparent manner.

Hubble can answer questions such as:

Service dependencies & communication map:

  • What services are communicating with each other? How frequently? What does the service dependency graph look like?
  • What HTTP calls are being made? What Kafka topics does a service consume from or produce to?

Operational monitoring & alerting:

  • Is any network communication failing? Why is communication failing? Is it DNS? Is it an application or network problem? Is the communication broken on layer 4 (TCP) or layer 7 (HTTP)?
  • Which services have experienced DNS resolution problems in the last 5 minutes? Which services have experienced an interrupted TCP connection recently or have seen connections timing out? What is the rate of unanswered TCP SYN requests?

Application monitoring:

  • What is the rate of 5xx or 4xx HTTP response codes for a particular service or across all clusters?
  • What is the 95th and 99th percentile latency between HTTP requests and responses in my cluster? Which services are performing the worst? What is the latency between two services?

Security observability:

  • Which services had connections blocked due to network policy? What services have been accessed from outside the cluster? Which services have resolved a particular DNS name?

Why Hubble?

The Linux kernel technology eBPF enables visibility into systems and applications at a granularity and efficiency that was not possible before. It does so in a completely transparent way, without requiring the application to change in any way. By building on top of Cilium, Hubble can leverage eBPF for visibility. By leveraging eBPF, all visibility is programmable and allows for a dynamic approach that minimizes overhead while providing deep and detailed insight where required. Hubble has been created and specifically designed to make the best use of these eBPF capabilities.

Releases

The Hubble CLI is backward compatible with all supported Cilium releases. For this reason, only the latest Hubble CLI version is maintained.

Version  Release Date           Maintained  Supported Cilium Version  Artifacts
v0.13    2024-04-18 (v0.13.3)   Yes         Cilium 1.15 and older     GitHub Release
v0.12    2023-12-08 (v0.12.3)   No          Cilium 1.14 and older     GitHub Release
v0.11    2023-06-07 (v0.11.6)   No          Cilium 1.13 and older     GitHub Release

Component Stability

The Hubble project consists of several components (see the Architecture section).

While the core Hubble components have been running in production in multiple environments, new components continue to emerge as the project grows and expands in scope.

Some components, due to their relatively young age, are still considered beta and should be used with caution in critical production workloads.

Component Area State
Hubble CLI Core Stable
Hubble Server Core Stable
Hubble Metrics Core Stable
Hubble Relay Multinode Stable
Hubble UI UI Beta

Architecture

Hubble Architecture

Getting Started

Features

Service Dependency Graph

Troubleshooting microservices application connectivity is a challenging task. Simply looking at "kubectl get pods" does not reveal the dependencies between services, external APIs, or databases.

Hubble enables zero-effort automatic discovery of the service dependency graph for Kubernetes clusters at L3/L4 and even L7, allowing user-friendly visualization and filtering of those dataflows as a Service Map.

See Hubble Service Map Tutorial for more examples.

Service Map

Metrics & Monitoring

The metrics and monitoring functionality provides an overview of the state of your systems and allows you to recognize patterns indicating failure and other scenarios that require action. The following is a short list of example metrics; for a more detailed list of examples, see the Metrics Documentation.

Networking Behavior

Networking

Network Policy Observation

Network Policy

HTTP Request/Response Rate & Latency

HTTP

DNS Request/Response Monitoring

DNS

Flow Visibility

Flow visibility provides visibility into flow information on the network and application protocol level. This enables visibility into individual TCP connections, DNS queries, HTTP requests, Kafka communication, and much more.

DNS Resolution

Identifying pods which have received a DNS response indicating failure:

hubble observe --since=1m -t l7 -o json \
   | jq 'select(.l7.dns.rcode==3) | .destination.namespace + "/" + .destination.pod_name' \
   | sort | uniq -c | sort -r
  42 "starwars/jar-jar-binks-6f5847c97c-qmggv"

Successful query & response:

starwars/x-wing-bd86d75c5-njv8k            kube-system/coredns-5c98db65d4-twwdg      DNS Query deathstar.starwars.svc.cluster.local. A
kube-system/coredns-5c98db65d4-twwdg       starwars/x-wing-bd86d75c5-njv8k           DNS Answer "10.110.126.213" TTL: 3 (Query deathstar.starwars.svc.cluster.local. A)

Non-existent domain:

starwars/jar-jar-binks-789c4b695d-ltrzm    kube-system/coredns-5c98db65d4-f4m8n      DNS Query unknown-galaxy.svc.cluster.local. A
starwars/jar-jar-binks-789c4b695d-ltrzm    kube-system/coredns-5c98db65d4-f4m8n      DNS Query unknown-galaxy.svc.cluster.local. AAAA
kube-system/coredns-5c98db65d4-twwdg       starwars/jar-jar-binks-789c4b695d-ltrzm   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Query unknown-galaxy.starwars.svc.cluster.local. A)
kube-system/coredns-5c98db65d4-twwdg       starwars/jar-jar-binks-789c4b695d-ltrzm   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Query unknown-galaxy.starwars.svc.cluster.local. AAAA)

HTTP Protocol

Successful request & response with latency information:

starwars/x-wing-bd86d75c5-njv8k:53410      starwars/deathstar-695d8f7ddc-lvj84:80    HTTP/1.1 GET http://deathstar/
starwars/deathstar-695d8f7ddc-lvj84:80     starwars/x-wing-bd86d75c5-njv8k:53410     HTTP/1.1 200 1ms (GET http://deathstar/)

TCP/UDP Packets

Successful TCP connection:

starwars/x-wing-bd86d75c5-njv8k:53410      starwars/deathstar-695d8f7ddc-lvj84:80    TCP Flags: SYN
deathstar.starwars.svc.cluster.local:80    starwars/x-wing-bd86d75c5-njv8k:53410     TCP Flags: SYN, ACK
starwars/x-wing-bd86d75c5-njv8k:53410      starwars/deathstar-695d8f7ddc-lvj84:80    TCP Flags: ACK, FIN
deathstar.starwars.svc.cluster.local:80    starwars/x-wing-bd86d75c5-njv8k:53410     TCP Flags: ACK, FIN

Connection timeout:

starwars/r2d2-6694d57947-xwhtz:60948   deathstar.starwars.svc.cluster.local:8080     TCP Flags: SYN
starwars/r2d2-6694d57947-xwhtz:60948   deathstar.starwars.svc.cluster.local:8080     TCP Flags: SYN
starwars/r2d2-6694d57947-xwhtz:60948   deathstar.starwars.svc.cluster.local:8080     TCP Flags: SYN

Network Policy Behavior

Denied connection attempt:

starwars/enterprise-5775b56c4b-thtwl:37800   starwars/deathstar-695d8f7ddc-lvj84:80(http)   Policy denied (L3)   TCP Flags: SYN
starwars/enterprise-5775b56c4b-thtwl:37800   starwars/deathstar-695d8f7ddc-lvj84:80(http)   Policy denied (L3)   TCP Flags: SYN
starwars/enterprise-5775b56c4b-thtwl:37800   starwars/deathstar-695d8f7ddc-lvj84:80(http)   Policy denied (L3)   TCP Flags: SYN

Specifying Raw Flow Filters

Hubble supports an extensive set of filtering options that can be specified as a combination of allowlist and denylist filters. Hubble applies these filters as follows:

for each flow:
  if flow does not match any of the allowlist filters:
    continue
  if flow matches any of the denylist filters:
    continue
  send flow to client
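
To make the semantics concrete, here is a minimal Go sketch of the same logic. The Flow and FlowFilter types are hypothetical stand-ins for Hubble's real flow and FlowFilter matching machinery:

package filters

// Flow is a hypothetical stand-in for an observed flow.
type Flow struct{}

// FlowFilter is a hypothetical stand-in for Hubble's FlowFilter matching.
type FlowFilter func(Flow) bool

func matchesAny(f Flow, filters []FlowFilter) bool {
	for _, filter := range filters {
		if filter(f) {
			return true
		}
	}
	return false
}

// filterFlows delivers a flow only if it matches at least one allowlist
// filter and none of the denylist filters.
func filterFlows(flows []Flow, allowlist, denylist []FlowFilter, send func(Flow)) {
	for _, f := range flows {
		if !matchesAny(f, allowlist) {
			continue // not matched by any allowlist filter
		}
		if matchesAny(f, denylist) {
			continue // explicitly denied
		}
		send(f)
	}
}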

You can pass these filters to the hubble observe command as JSON-encoded FlowFilters. For example, to observe flows that match the following conditions:

  • Either the source or destination identity contains k8s:io.kubernetes.pod.namespace=kube-system or reserved:host label, AND

  • Neither the source nor destination identity contains k8s:k8s-app=kube-dns label:

    hubble observe \
      --allowlist '{"source_label":["k8s:io.kubernetes.pod.namespace=kube-system","reserved:host"]}' \
      --allowlist '{"destination_label":["k8s:io.kubernetes.pod.namespace=kube-system","reserved:host"]}' \
      --denylist '{"source_label":["k8s:k8s-app=kube-dns"]}' \
      --denylist '{"destination_label":["k8s:k8s-app=kube-dns"]}'
    

Alternatively, you can specify these filters as HUBBLE_{ALLOWLIST,DENYLIST} environment variables:

cat > allowlist.txt <<EOF
{"source_label":["k8s:io.kubernetes.pod.namespace=kube-system","reserved:host"]}
{"destination_label":["k8s:io.kubernetes.pod.namespace=kube-system","reserved:host"]}
EOF

cat > denylist.txt <<EOF
{"source_label":["k8s:k8s-app=kube-dns"]}
{"destination_label":["k8s:k8s-app=kube-dns"]}
EOF

HUBBLE_ALLOWLIST=$(cat allowlist.txt)
HUBBLE_DENYLIST=$(cat denylist.txt)
export HUBBLE_ALLOWLIST
export HUBBLE_DENYLIST

hubble observe

Note that --allowlist and --denylist filters get included in the request in addition to regular flow filters like --label or --namespace. Use the --print-raw-filters flag to verify the exact filters that the Hubble CLI generates. For example:

% hubble observe --print-raw-filters \
    -t drop \
    -n kube-system \
    --not --label "k8s:k8s-app=kube-dns" \
    --allowlist '{"source_label":["k8s:k8s-app=my-app"]}'
allowlist:
- '{"source_pod":["kube-system/"],"event_type":[{"type":1}]}'
- '{"destination_pod":["kube-system/"],"event_type":[{"type":1}]}'
- '{"source_label":["k8s:k8s-app=my-app"]}'
denylist:
- '{"source_label":["k8s:k8s-app=kube-dns"]}'
- '{"destination_label":["k8s:k8s-app=kube-dns"]}'

The output YAML can be saved to a file and passed to the hubble observe command with the --config flag. For example:

% hubble observe --print-raw-filters --allowlist '{"source_label":["k8s:k8s-app=my-app"]}' > filters.yaml
% hubble observe --config ./filters.yaml

Community

Join the Cilium Slack #hubble channel to chat with Cilium Hubble developers and other Cilium / Hubble users. This is a good place to learn about Hubble and Cilium, ask questions, and share your experiences.

Learn more about Cilium.

Authors

Hubble is an open source project licensed under the Apache License. Everybody is welcome to contribute. The project is following the Governance Rules of the Cilium project. See CONTRIBUTING for instructions on how to contribute and details of the Code of Conduct.

hubble's People

Contributors

awesomepatrol, bmcustodio, chancez, christarazi, chrsmark, dependabot[bot], dsexton, gandro, geakstr, glibsm, glrf, joestringer, jrajahalme, kaworu, lambdanis, meyskens, michi-covalent, pchaigno, raphink, renovate[bot], rolinh, sayboras, sharjeelaziz, simar7, slayer321, tgraf, tklauser, twpayne, vadorovsky, zhiyanfoo

hubble's Issues

Remove endpoint history from endpoint cache

The current endpoint cache in github.com/cilium/hubble/pkg/api/v1.Endpoints contains a few artifacts from early versions of Hubble where flows were decoded late, i.e. when a gRPC client requested them. This meant that the endpoint cache had to keep "historic" data about endpoints around, which it does by tracking each endpoint's deletion time:

Deleted *time.Time `json:"deleted"`

Nowadays however, endpoint information is annotated as soon as a flow is observed, making it pointless to keep deleted endpoints around.

We should refactor the endpoint cache and get rid of the "deleted endpoints" functionality, as it complicates the code: every getter has to check that endpoints are not deleted (see the sketch after the list below). This also gives us the opportunity to fix a few other issues with the current package, such as:

  • Its location in api/v1, even though it is not really related to the gRPC interface anymore. It should probably look more similar to the fqdn/ip/service caches.
  • The potential footguns it has with regards to concurrency, i.e. it returns pointers whose access is supposed to be synchronized (see #74 (comment))
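
For illustration, here is a minimal sketch (with hypothetical names, not the actual package code) of the pattern this design forces, where every getter must filter out deleted endpoints:

package v1

import (
	"sync"
	"time"
)

// Endpoint is a simplified, hypothetical version of the cached endpoint.
type Endpoint struct {
	ID      uint64
	Deleted *time.Time // nil while the endpoint is alive
}

// Endpoints is a simplified, hypothetical version of the endpoint cache.
type Endpoints struct {
	mu  sync.RWMutex
	eps []*Endpoint
}

// GetEndpoint returns the live endpoint with the given ID, if any.
func (es *Endpoints) GetEndpoint(id uint64) (*Endpoint, bool) {
	es.mu.RLock()
	defer es.mu.RUnlock()
	for _, ep := range es.eps {
		// The deletion check that every getter has to repeat.
		if ep.ID == id && ep.Deleted == nil {
			return ep, true
		}
	}
	return nil, false
}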

Hubble UI cannot render due to Error: unable to get issuer certificate


We cannot render the hubble-ui due to the error message below:

"message":"Can't fetch namespaces via k8s api: Error: unable to get issuer certificate","locations":[{"line":4,"column":7}],"path":["viewer","clusters"],"extensions":{"code":"INTERNAL_SERVER_ERROR"}}

{
name: 'inCluster',
caFile: '/var/run/secrets/kubernetes.io/serviceaccount/ca.crt',
server: 'https://10.110.121.43:443',
skipTLSVerify: false
}

hubble observe --since does not terminate

Using hubble observe --since T without any --until filter no longer terminates automatically, presumably because the server does not close the response and waits for new flows to arrive. This happens until the user terminates the request by pressing CTRL+C.

This behavior should only occur if --follow is passed. If --follow is omitted, the server should close the connection itself once it has returned all available flows.
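
A rough sketch of the intended server-side behavior, using hypothetical reader and send types rather than Hubble's actual gRPC code:

package observe

import (
	"errors"
	"io"
)

// Flow is a hypothetical placeholder for an observed flow.
type Flow struct{}

type flowReader interface {
	// Next returns the next buffered flow, or io.EOF once the ring
	// buffer has been drained.
	Next() (*Flow, error)
	// WaitNext blocks until a new flow arrives (follow mode).
	WaitNext() (*Flow, error)
}

func serveFlows(r flowReader, follow bool, send func(*Flow) error) error {
	for {
		f, err := r.Next()
		if errors.Is(err, io.EOF) {
			if !follow {
				return nil // all buffered flows returned: close the response
			}
			f, err = r.WaitNext() // follow mode: wait for new flows
		}
		if err != nil {
			return err
		}
		if err := send(f); err != nil {
			return err
		}
	}
}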

--follow doesn't work with --dict

hubble observe --follow --dict should follow in dictionary mode, yet it incorrectly defaults to compact.

--follow --json works correctly and follows in JSON.

make DefaultSocketPath configurable

Similar to #122, make the default socket path configurable so that we can change it to something like unix:///var/run/cilium/hubble.sock in embedded mode. This way the observe command works without the --server argument.

Hubble DNS visibility only works in one direction


$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.6.5/examples/minikube/http-sw-app.yaml
... wait for pods ...
$ kubectl annotate pod tiefighter io.cilium.proxy-visibility="<Egress/53/UDP/DNS>,<Egress/80/TCP/HTTP>"
$ kubectl get ciliumendpoint tiefighter
NAME         ENDPOINT ID   IDENTITY ID   INGRESS ENFORCEMENT   EGRESS ENFORCEMENT   VISIBILITY POLICY   ENDPOINT STATE   IPV4         IPV6
tiefighter   3441          52622         false                 false                OK                  ready            10.1.172.5
# hubble observe -n default -f
Feb  1 00:00:56.329 [allosaurus]: default/tiefighter:57372 -> kube-system/coredns-7b67f9f8c-5r8g2:53(domain) to-proxy FORWARDED (UDP)
Feb  1 00:00:56.329 [allosaurus]: default/tiefighter:57372 -> kube-system/coredns-7b67f9f8c-5r8g2:53(domain) to-proxy FORWARDED (UDP)
Feb  1 00:00:56.329 [allosaurus]: default/tiefighter:57372 -> kube-system/coredns-7b67f9f8c-5r8g2:53(domain) dns-request FORWARDED (DNS Query yahoo.com. AAAA)
Feb  1 00:00:56.329 [allosaurus]: default/tiefighter:57372 -> kube-system/coredns-7b67f9f8c-5r8g2:53(domain) dns-request FORWARDED (DNS Query yahoo.com. A)
Feb  1 00:00:56.341 [allosaurus]: kube-system/coredns-7b67f9f8c-5r8g2:53(domain) -> default/tiefighter:57372 dns-response FORWARDED (DNS Answer "2001:4998:c:1023::4,2001:4998:58:1836::11,2001:4998:44:41d::3,2001:4998:58:1836::10,2001:4998:44:41d::4,2001:4998:c:1023::5" TTL: 30 (Query yahoo.com. AAAA))
Feb  1 00:00:56.341 [allosaurus]: kube-system/coredns-7b67f9f8c-5r8g2:53(domain) -> default/tiefighter:57372 dns-response FORWARDED (DNS Answer "98.138.219.231,72.30.35.9,72.30.35.10,98.137.246.7,98.137.246.8,98.138.219.232" TTL: 30 (Query yahoo.com. A))
Feb  1 00:00:56.343 [allosaurus]: kube-system/coredns-7b67f9f8c-5r8g2:53(domain) -> default/tiefighter:57372 to-endpoint FORWARDED (UDP)
Feb  1 00:00:56.343 [allosaurus]: kube-system/coredns-7b67f9f8c-5r8g2:53(domain) -> default/tiefighter:57372 to-endpoint FORWARDED (UDP)
Feb  1 00:00:56.343 [allosaurus]: default/tiefighter:38662 -> 98.138.219.231:80(http) to-proxy FORWARDED (TCP Flags: SYN)
Feb  1 00:00:56.343 [allosaurus]: yahoo.com:80(http) -> default/tiefighter:38662 to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Feb  1 00:00:56.343 [allosaurus]: default/tiefighter:38662 -> 98.138.219.231:80(http) to-proxy FORWARDED (TCP Flags: ACK)
Feb  1 00:00:56.343 [allosaurus]: default/tiefighter:38662 -> 98.138.219.231:80(http) to-proxy FORWARDED (TCP Flags: ACK, PSH)
Feb  1 00:00:56.342 [allosaurus]: default/tiefighter:38662 -> 98.138.219.231:80(http) http-request FORWARDED (HTTP/1.1 GET http://yahoo.com/)
Feb  1 00:00:56.448 [allosaurus]: yahoo.com:80(http) -> default/tiefighter:38662 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Feb  1 00:00:56.448 [allosaurus]: default/tiefighter:38662 -> 98.138.219.231:80(http) to-proxy FORWARDED (TCP Flags: ACK, FIN)
Feb  1 00:00:56.449 [allosaurus]: yahoo.com:80(http) -> default/tiefighter:38662 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Feb  1 00:00:56.449 [allosaurus]: default/tiefighter:38662 -> 98.138.219.231:80(http) to-proxy FORWARDED (TCP Flags: ACK)
Feb  1 00:00:56.448 [allosaurus]: 98.138.219.231:80(http) -> default/tiefighter:38662 http-response FORWARDED (HTTP/1.1 301 105ms (GET http://yahoo.com/))
^Cbash-5.0# 

Translate service IPs to service names

Similar to how we map IP addresses to domain and pod names, we should also map service IPs to the Kubernetes service name. The necessary changes in Cilium have already been done ( cilium/cilium#9554 cilium/cilium#9574), what still remains are the necessary changes in Hubble.

Suggested steps (a rough sketch of the resulting service cache follows the list):

  1. Extend our Cilium client to fetch the services from the Cilium API
  2. Initialize a service cache from the Cilium API (/v1/service/)
  3. Parse AgentNotifyService{Upserted,Deleted} messages (similarly to how IPCache notifications work, see serve.go and ipcache.go), and incorporate updates into the service cache
  4. Use the service cache in both the threefour and seven parser to populate the SourceService or DestinationService field
  5. Extend printer to display the service name in the CLI
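
A rough sketch of what such a service cache could look like (all names are illustrative, not the actual Hubble implementation):

package servicecache

import "sync"

// Service identifies a Kubernetes service.
type Service struct {
	Name      string
	Namespace string
}

// Cache maps service IPs to Kubernetes service names, updated from
// upsert/delete notifications.
type Cache struct {
	mu   sync.RWMutex
	byIP map[string]Service
}

func New() *Cache { return &Cache{byIP: make(map[string]Service)} }

func (c *Cache) Upsert(ip string, svc Service) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.byIP[ip] = svc
}

func (c *Cache) Delete(ip string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.byIP, ip)
}

// Lookup is what the threefour/seven parsers would call to populate the
// SourceService/DestinationService fields.
func (c *Cache) Lookup(ip string) (Service, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	svc, ok := c.byIP[ip]
	return svc, ok
}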

Useless node name in CLI output

The node name in the output is pretty useless when operating the CLI in a single-node context and only steals space.

ks exec -ti hubble-gt7m2 -- hubble observe -t drop --follow
Feb  9 13:28:21.965 [ip-192-168-12-253.us-west-2.compute.internal]: default/pod-to-a-l3-denied-cnp-65fd99db76-2knrs:51104 -> default/echo-a-7f74df756d-s4b9p:80(http) Policy denied (L3) DROPPED (TCP Flags: SYN)
Feb  9 13:28:22.798 [ip-192-168-12-253.us-west-2.compute.internal]: fe80::3042:1dff:fe0c:17df -> ff02::2 Unsupported L3 protocol DROPPED (ICMPv6 RouterSolicitation)
Feb  9 13:28:23.743 [ip-192-168-12-253.us-west-2.compute.internal]: default/pod-to-a-l3-denied-cnp-65fd99db76-2knrs:51112 -> default/echo-a-7f74df756d-s4b9p:80(http) Policy denied (L3) DROPPED (TCP Flags: SYN)
Feb  9 13:28:23.982 [ip-192-168-12-253.us-west-2.compute.internal]: default/pod-to-a-l3-denied-cnp-65fd99db76-2knrs:51104 -> default/echo-a-7f74df756d-s4b9p:80(http) Policy denied (L3) DROPPED (TCP Flags: SYN)
Feb  9 13:28:24.749 [ip-192-168-12-253.us-west-2.compute.internal]: default/pod-to-a-l3-denied-cnp-65fd99db76-2knrs:51112 -> default/echo-a-7f74df756d-s4b9p:80(http) Policy denied (L3) DROPPED (TCP Flags: SYN)
Feb  9 13:28:26.765 [ip-192-168-12-253.us-west-2.compute.internal]: default/pod-to-a-l3-denied-cnp-65fd99db76-2knrs:51112 -> default/echo-a-7f74df756d-s4b9p:80(http) Policy denied (L3) DROPPED (TCP Flags: SYN)
Feb  9 13:28:30.222 [ip-192-168-12-253.us-west-2.compute.internal]: fe80::3042:1dff:fe0c:17df -> ff02::2 Unsupported L3 protocol DROPPED (ICMPv6 RouterSolicitation)

Skip output port if port is unknown

Example of a DNS L7 flow:

Nov 20 17:44:04.814   kube-system/coredns-5c98db65d4-twwdg:0      starwars/jar-jar-binks-59cdcc8dc4-hxl6w:0   dns-response   FORWARDED   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Query unknown-galaxy. A)

Printing :0 does not add any value and only uses up space; the port should simply be skipped when it is unknown (see the sketch below).
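
A minimal sketch of the proposed printer behavior, assuming a hypothetical formatEndpoint helper:

package printer

import "fmt"

// formatEndpoint appends the port to the endpoint name, omitting the
// ":0" suffix when the port is unknown.
func formatEndpoint(name string, port uint16) string {
	if port == 0 {
		return name // unknown port: printing ":0" adds no value
	}
	return fmt.Sprintf("%s:%d", name, port)
}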

Add TODOs Badge

Hi there! I wanted to propose adding the following badge to the README to indicate how many TODO comments are in this codebase:

TODOs

The badge links to tickgit.com which is a free service that indexes and displays TODO comments in public github repos. It can help surface latent work and be a way for contributors to find areas of code to improve, that might not be otherwise documented.

The markdown is:

[![TODOs](https://img.shields.io/endpoint?url=https://api.tickgit.com/badge?repo=github.com/cilium/hubble)](https://www.tickgit.com/browse?repo=github.com/cilium/hubble)

Thanks for considering, feel free to close this issue if it's not appropriate or you prefer not to!

Define an interface to access Endpoint fields

So that hubble and cilium (for embedded mode) can define the endpoint getter interface independently. It needs access to the following fields (one possible shape is sketched after this list):

  • ID
  • PodName
  • PodNamespace
  • Labels
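
One possible shape for such an interface (method names are illustrative, not the final API):

package getters

// EndpointInfo is a sketch of an interface that both hubble and cilium
// could implement independently for the endpoint getter.
type EndpointInfo interface {
	GetID() uint64
	GetPodName() string
	GetPodNamespace() string
	GetLabels() []string
}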

service name fields missing

I'm using minikube with docker.io/cilium/cilium:latest and quay.io/cilium/hubble:latest. I don't see the service name fields set when I run curl hubble-ui-svc:12000 from the hubble pod. Here is the output from hubble observe --server localhost:50051 -n default --port 12000 -f --json:

flows.txt

parser: Use ipcache to populate labels

The parser currently fails to annotate endpoint labels for endpoints running on remote nodes; this is due to the parser relying only on Cilium's trace information and the local endpoint list to populate labels.

We should make the parser additionally consult the security identity information available in the ipcache. This would allow us to annotate labels for all known IP addresses.
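
A sketch of the intended fallback logic, with hypothetical getter interfaces standing in for the real endpoint and ipcache lookups:

package parser

// endpointGetter is a hypothetical lookup over the local endpoint list.
type endpointGetter interface {
	LabelsByIP(ip string) ([]string, bool)
}

// ipcacheGetter is a hypothetical lookup over the ipcache.
type ipcacheGetter interface {
	LabelsByIP(ip string) ([]string, bool)
}

// resolveLabels falls back to the ipcache when the local endpoint list
// cannot resolve labels for an IP (e.g. a remote endpoint).
func resolveLabels(ip string, eps endpointGetter, ipc ipcacheGetter) []string {
	if labels, ok := eps.LabelsByIP(ip); ok {
		return labels // local endpoint: trace info is sufficient
	}
	if labels, ok := ipc.LabelsByIP(ip); ok {
		return labels // remote endpoint: security identity from the ipcache
	}
	return nil
}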

docker build failing

% docker build .
Sending build context to Docker daemon  77.13MB
Step 1/10 : FROM docker.io/library/golang:1.12.8-alpine3.10 as builder
 ---> 5b92ed72e216
Step 2/10 : WORKDIR /go/src/github.com/cilium/hubble
 ---> Using cache
 ---> 803a6b64b66d
Step 3/10 : RUN apk add --no-cache binutils git make  && go get -d github.com/google/gops  && cd /go/src/github.com/google/gops  && git checkout -b v0.3.6 v0.3.6  && go install  && strip /go/bin/gops
 ---> Using cache
 ---> 300fc80b6d93
Step 4/10 : COPY . .
 ---> 90be6e428192
Step 5/10 : RUN make clean && make hubble
 ---> Running in 653ebfb40c6c
rm -f hubble
go build -mod=vendor -o hubble
build flag -mod=vendor only valid when using modules
make: *** [Makefile:11: hubble] Error 1
The command '/bin/sh -c make clean && make hubble' returned a non-zero code: 2

Consider reversing output when using `--since`

Hubble currently outputs flows with the latest flow first, which requires reading request and response interactions from bottom to top. This seems unnatural.

We should consider reversing the order in which flows are being printed.

servicemap for native system services

Hi,

Is it possible to use Hubble to discover the service map for native system services (e.g. services restarted by systemd from a unit file), or does it only work for services deployed by k8s?
The use case I'm after is to run Hubble on the hosts of a Kubernetes cluster, and use it to discover the service map and monitor the connections between critical system services.

It looks like this:

# log on a host
# hubble discover   
haproxy:api_server -> apiserver -> etcd:3000
fluent-bit -> fluentd -> Elastcisearch 
# hubble monitor 
error: connection between fluent-bit and fluentd is down.

hubble pods get FailedCreatePodSandBox events and cannot deploy successfully

I followed the documentation linked below and hit some issues. https://github.com/cilium/hubble/blob/master/Documentation/metrics.md

Events:
  Type     Reason                  Age                  From               Message
  ----     ------                  ----                 ----               -------
  Normal   Scheduled               9m32s                default-scheduler  Successfully assigned kube-system/hubble-mz69h to worker3
  Warning  FailedCreatePodSandBox  7m29s                kubelet, worker3   Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3c82cfa8ee338c56dec1dea6d870e02d4f04ddc00b837371255d733a13a296f9" network for pod "hubble-mz69h": NetworkPlugin cni failed to set up pod "hubble-mz69h_kube-system" network: Unable to create endpoint: Put http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0: context deadline exceeded
  Warning  FailedCreatePodSandBox  5m30s                kubelet, worker3   Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "589433128cfde76e1c308b8294e994591135da4c43e1e5c532dc08bca56c0091" network for pod "hubble-mz69h": NetworkPlugin cni failed to set up pod "hubble-mz69h_kube-system" network: Unable to create endpoint: Put http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0: context deadline exceeded
  Warning  FailedCreatePodSandBox  3m33s                kubelet, worker3   Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "937ea4b3a5ccac256e8f55112e15430bd7a64d7ea46d91efb85816982881cc6a" network for pod "hubble-mz69h": NetworkPlugin cni failed to set up pod "hubble-mz69h_kube-system" network: Unable to create endpoint: Put http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0: context deadline exceeded
  Normal   SandboxChanged          89s (x4 over 7m29s)  kubelet, worker3   Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  89s                  kubelet, worker3   Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f0f6819f18a2fcd5bb7affa38d7934d13bd7789ca7cbbbda7110924651a0bc10" network for pod "hubble-mz69h": NetworkPlugin cni failed to set up pod "hubble-mz69h_kube-system" network: Unable to create endpoint: Put http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0: context deadline exceeded

cilium pods' logs

level=warning msg="BPF program is too large. Processed 131073 insn" subsys=datapath-loader
level=warning subsys=datapath-loader
level=warning msg="Error fetching program/map!" subsys=datapath-loader
level=warning msg="Unable to load program" subsys=datapath-loader
level=warning msg="JoinEP: Failed to load program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=353 error="Failed to load tc filter: exit status 1" file-path=353_next/bpf_lxc.o identity=4 ipv4=10.42.1.114 ipv6= k8sPodName=/ subsys=datapath-loader veth=lxc_health
level=error msg="Error while rewriting endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=353 error="Failed to load tc filter: exit status 1" identity=4 ipv4=10.42.1.114 ipv6= k8sPodName=/ subsys=endpoint
level=warning msg="generating BPF for endpoint failed, keeping stale directory." containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=353 file-path=353_next_fail identity=4 ipv4=10.42.1.114 ipv6= k8sPodName=/ subsys=endpoint
level=warning msg="Regeneration of endpoint failed" bpfCompilation=0s bpfLoadProg=20.66026321s bpfWaitForELF="5.563µs" bpfWriteELF="200.291µs" buildDuration=20.664940711s containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=353 error="Failed to load tc filter: exit status 1" identity=4 ipv4=10.42.1.114 ipv6= k8sPodName=/ mapSync="3.056µs" policyCalculation="4.679µs" prepareBuild="299.887µs" proxyConfiguration="9.603µs" proxyPolicyCalculation="16.252µs" proxyWaitForAck=0s reason="retrying regeneration" subsys=endpoint waitingForCTClean=3.159062ms waitingForLock=749ns
level=error msg="endpoint regeneration failed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=353 error="Failed to load tc filter: exit status 1" identity=4 ipv4=10.42.1.114 ipv6= k8sPodName=/ subsys=endpoint
level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=a508b3ed4c controller="sync-to-k8s-ciliumendpoint (3544)" datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3544 identity=104 ipv4=10.42.1.235 ipv6= k8sPodName=kube-system/coredns-799dffd9c4-pvctw subsys=endpointsynchronizer
level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=a508b3ed4c controller="sync-to-k8s-ciliumendpoint (3544)" datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3544 identity=104 ipv4=10.42.1.235 ipv6= k8sPodName=kube-system/coredns-799dffd9c4-pvctw subsys=endpointsynchronizer
level=error msg="Command execution failed" cmd="[tc filter replace dev lxc_health ingress prio 1 handle 1 bpf da obj 353_next/bpf_lxc.o sec from-container]" error="exit status 1" subsys=datapath-loader
level=warning subsys=datapath-loader
level=warning msg="Prog section 'from-container' rejected: Argument list too long (7)!" subsys=datapath-loader
level=warning msg=" - Type:         3" subsys=datapath-loader
level=warning msg=" - Attach Type:  0" subsys=datapath-loader
level=warning msg=" - Instructions: 2626 (0 over limit)" subsys=datapath-loader
level=warning msg=" - License:      GPL" subsys=datapath-loader
level=warning subsys=datapath-loader
level=warning msg="Verifier analysis:" subsys=datapath-loader
level=warning subsys=datapath-loader
level=warning msg="Skipped 10968082 bytes, use 'verb' option for the full verbose log." subsys=datapath-loader
level=warning msg="[...]" subsys=datapath-loader
level=warning msg="7 +0)" subsys=datapath-loader
level=warning msg=" R0=map_value(id=0,off=0,ks=2,vs=8,imm=0) R1_w=inv(id=0,umax_value=65535,var_off=(0x0; 0xffff)) R6=ctx(id=0,off=0,imm=0) R7=map_value(id=0,off=0,ks=12,vs=12,imm=0) R8=inv0 R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256_w=map_value" subsys=datapath-loader
level=warning msg="656: (7b) *(u64 *)(r10 -88) = r1" subsys=datapath-loader
level=warning msg="657: (b7) r1 = 0" subsys=datapath-loader
level=warning msg="658: (7b) *(u64 *)(r10 -56) = r1" subsys=datapath-loader
level=warning msg="659: (7b) *(u64 *)(r10 -64) = r1" subsys=datapath-loader
level=warning msg="660: (7b) *(u64 *)(r10 -48) = r1" subsys=datapath-loader
level=warning msg="661: (7b) *(u64 *)(r10 -72) = r1" subsys=datapath-loader
level=warning msg="662: (7b) *(u64 *)(r10 -80) = r1" subsys=datapath-loader
level=warning msg="663: (7b) *(u64 *)(r10 -96) = r1" subsys=datapath-loader
level=warning msg="664: (6b) *(u16 *)(r10 -56) = r1" subsys=datapath-loader
level=warning msg="665: (b7) r7 = 1" subsys=datapath-loader
level=warning msg="666: (71) r2 = *(u8 *)(r10 -100)" subsys=datapath-loader
level=warning msg="667: (15) if r2 == 0x6 goto pc+1" subsys=datapath-loader
level=warning msg=" R0=map_value(id=0,off=0,ks=2,vs=8,imm=0) R1=inv0 R2=inv(id=0,umax_value=255,var_off=(0x0; 0xff)) R6=ctx(id=0,off=0,imm=0) R7=inv1 R8=inv0 R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-48=0 fp-56=0 fp-64=0 fp-72=0 fp-80=0 fp-96=0 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256=map_value" subsys=datapath-loader
level=warning msg="668: (b7) r7 = 0" subsys=datapath-loader
level=warning msg="669: (79) r1 = *(u64 *)(r10 -208)" subsys=datapath-loader
level=warning msg="670: (bf) r3 = r1" subsys=datapath-loader
level=warning msg="671: (67) r3 <<= 4" subsys=datapath-loader
level=warning msg="672: (57) r3 &= 32" subsys=datapath-loader
level=warning msg="673: (67) r1 <<= 3" subsys=datapath-loader
level=warning msg="674: (57) r1 &= 8" subsys=datapath-loader
level=warning msg="675: (4f) r1 |= r3" subsys=datapath-loader
level=warning msg="676: (b7) r8 = 60" subsys=datapath-loader
level=warning msg="677: (79) r3 = *(u64 *)(r10 -200)" subsys=datapath-loader
level=warning msg="678: (6b) *(u16 *)(r10 -58) = r3" subsys=datapath-loader
level=warning msg="679: (6b) *(u16 *)(r10 -60) = r1" subsys=datapath-loader
level=warning msg="680: (67) r7 <<= 1" subsys=datapath-loader
level=warning msg="681: (55) if r2 != 0x6 goto pc+9" subsys=datapath-loader
level=warning msg=" R0=map_value(id=0,off=0,ks=2,vs=8,imm=0) R1=inv0 R2=inv6 R3=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=inv0 R8=inv60 R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-48=0 fp-56=0 fp-72=0 fp-80=0 fp-96=0 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256=map_value" subsys=datapath-loader
level=warning msg="682: (bf) r2 = r7" subsys=datapath-loader
level=warning msg="683: (67) r2 <<= 3" subsys=datapath-loader
level=warning msg="684: (a7) r2 ^= 16" subsys=datapath-loader
level=warning msg="685: (57) r2 &= 248" subsys=datapath-loader
level=warning msg="686: (b7) r8 = 60" subsys=datapath-loader
level=warning msg="687: (15) if r2 == 0x0 goto pc+1" subsys=datapath-loader
level=warning msg=" R0=map_value(id=0,off=0,ks=2,vs=8,imm=0) R1=inv0 R2=inv(id=0,umax_value=248,var_off=(0x0; 0xf8)) R3=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=inv0 R8=inv60 R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-48=0 fp-56=0 fp-72=0 fp-80=0 fp-96=0 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256=map_value" subsys=datapath-loader
level=warning msg="688: (b7) r8 = 21600" subsys=datapath-loader
level=warning msg="689: (4f) r1 |= r2" subsys=datapath-loader
level=warning msg="690: (6b) *(u16 *)(r10 -60) = r1" subsys=datapath-loader
level=warning msg="691: (85) call bpf_ktime_get_ns#5" subsys=datapath-loader
level=warning msg="692: (37) r0 /= 1000000000" subsys=datapath-loader
level=warning msg="693: (0f) r8 += r0" subsys=datapath-loader
level=warning msg="694: (63) *(u32 *)(r10 -64) = r8" subsys=datapath-loader
level=warning msg="695: (71) r3 = *(u8 *)(r10 -54)" subsys=datapath-loader
level=warning msg="696: (61) r2 = *(u32 *)(r10 -48)" subsys=datapath-loader
level=warning msg="697: (bf) r1 = r3" subsys=datapath-loader
level=warning msg="698: (4f) r1 |= r7" subsys=datapath-loader
level=warning msg="699: (bf) r4 = r1" subsys=datapath-loader
level=warning msg="700: (57) r4 &= 255" subsys=datapath-loader
level=warning msg="701: (5d) if r3 != r4 goto pc+7" subsys=datapath-loader
level=warning msg=" R0=inv(id=0) R1=inv0 R2=inv0 R3=inv0 R4=inv0 R6=ctx(id=0,off=0,imm=0) R7=inv0 R8=inv(id=0) R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-48=0 fp-56=0 fp-72=0 fp-80=0 fp-96=0 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256=map_value" subsys=datapath-loader
level=warning msg="702: (07) r2 += 5" subsys=datapath-loader
level=warning msg="703: (bf) r3 = r0" subsys=datapath-loader
level=warning msg="704: (67) r3 <<= 32" subsys=datapath-loader
level=warning msg="705: (77) r3 >>= 32" subsys=datapath-loader
level=warning msg="706: (67) r2 <<= 32" subsys=datapath-loader
level=warning msg="707: (77) r2 >>= 32" subsys=datapath-loader
level=warning msg="708: (3d) if r2 >= r3 goto pc+2" subsys=datapath-loader
level=warning msg=" R0=inv(id=0) R1=inv0 R2=inv5 R3=inv(id=0,umin_value=6,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R4=inv0 R6=ctx(id=0,off=0,imm=0) R7=inv0 R8=inv(id=0) R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-48=0 fp-56=0 fp-72=0 fp-80=0 fp-96=0 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256=map_value" subsys=datapath-loader
level=warning msg="709: (73) *(u8 *)(r10 -54) = r1" subsys=datapath-loader
level=warning msg="710: (63) *(u32 *)(r10 -48) = r0" subsys=datapath-loader
level=warning msg="711: (b7) r7 = 0" subsys=datapath-loader
level=warning msg="712: (63) *(u32 *)(r10 -52) = r7" subsys=datapath-loader
level=warning msg="713: (bf) r2 = r10" subsys=datapath-loader
level=warning msg="714: (07) r2 += -112" subsys=datapath-loader
level=warning msg="715: (bf) r3 = r10" subsys=datapath-loader
level=warning msg="716: (07) r3 += -96" subsys=datapath-loader
level=warning msg="717: (79) r8 = *(u64 *)(r10 -224)" subsys=datapath-loader
level=warning msg="718: (bf) r1 = r8" subsys=datapath-loader
level=warning msg="719: (b7) r4 = 0" subsys=datapath-loader
level=warning msg="720: (85) call bpf_map_update_elem#2" subsys=datapath-loader
level=warning msg="721: (67) r0 <<= 32" subsys=datapath-loader
level=warning msg="722: (c7) r0 s>>= 32" subsys=datapath-loader
level=warning msg="723: (65) if r0 s> 0xffffffff goto pc+755" subsys=datapath-loader
level=warning msg=" R0=inv(id=0,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) R6=ctx(id=0,off=0,imm=0) R7=inv0 R8=map_ptr(id=0,off=0,ks=14,vs=56) R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256=map_value" subsys=datapath-loader
level=warning msg="724: (79) r1 = *(u64 *)(r10 -216)" subsys=datapath-loader
level=warning msg="725: (73) *(u8 *)(r10 -99) = r1" subsys=datapath-loader
level=warning msg="BPF program is too large. Processed 131073 insn" subsys=datapath-loader
level=warning subsys=datapath-loader
level=warning msg="Error fetching program/map!" subsys=datapath-loader
level=warning msg="Unable to load program" subsys=datapath-loader
level=warning msg="JoinEP: Failed to load program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=353 error="Failed to load tc filter: exit status 1" file-path=353_next/bpf_lxc.o identity=4 ipv4=10.42.1.114 ipv6= k8sPodName=/ subsys=datapath-loader veth=lxc_health
level=error msg="Error while rewriting endpoint BPF program" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=353 error="Failed to load tc filter: exit status 1" identity=4 ipv4=10.42.1.114 ipv6= k8sPodName=/ subsys=endpoint
level=warning msg="generating BPF for endpoint failed, keeping stale directory." containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=353 file-path=353_next_fail identity=4 ipv4=10.42.1.114 ipv6= k8sPodName=/ subsys=endpoint
level=warning msg="Regeneration of endpoint failed" bpfCompilation=0s bpfLoadProg=20.277220473s bpfWaitForELF="6.511µs" bpfWriteELF="216.031µs" buildDuration=20.281390599s containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=353 error="Failed to load tc filter: exit status 1" identity=4 ipv4=10.42.1.114 ipv6= k8sPodName=/ mapSync="2.889µs" policyCalculation="6.59µs" prepareBuild="322.732µs" proxyConfiguration="9.548µs" proxyPolicyCalculation="25.878µs" proxyWaitForAck=0s reason="retrying regeneration" subsys=endpoint waitingForCTClean=2.611944ms waitingForLock=931ns
level=error msg="endpoint regeneration failed" containerID= datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=353 error="Failed to load tc filter: exit status 1" identity=4 ipv4=10.42.1.114 ipv6= k8sPodName=/ subsys=endpoint
level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=a508b3ed4c controller="sync-to-k8s-ciliumendpoint (3544)" datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3544 identity=104 ipv4=10.42.1.235 ipv6= k8sPodName=kube-system/coredns-799dffd9c4-pvctw subsys=endpointsynchronizer
level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=a508b3ed4c controller="sync-to-k8s-ciliumendpoint (3544)" datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3544 identity=104 ipv4=10.42.1.235 ipv6= k8sPodName=kube-system/coredns-799dffd9c4-pvctw subsys=endpointsynchronizer
level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=a508b3ed4c controller="sync-to-k8s-ciliumendpoint (3544)" datapathPolicyRevision=0 desiredPolicyRevision=1 endpointID=3544 identity=104 ipv4=10.42.1.235 ipv6= k8sPodName=kube-system/coredns-799dffd9c4-pvctw subsys=endpointsynchronizer
level=error msg="Command execution failed" cmd="[tc filter replace dev lxc_health ingress prio 1 handle 1 bpf da obj 353_next/bpf_lxc.o sec from-container]" error="exit status 1" subsys=datapath-loader
level=warning subsys=datapath-loader
level=warning msg="Prog section 'from-container' rejected: Argument list too long (7)!" subsys=datapath-loader
level=warning msg=" - Type:         3" subsys=datapath-loader
level=warning msg=" - Attach Type:  0" subsys=datapath-loader
level=warning msg=" - Instructions: 2626 (0 over limit)" subsys=datapath-loader
level=warning msg=" - License:      GPL" subsys=datapath-loader
level=warning subsys=datapath-loader
level=warning msg="Verifier analysis:" subsys=datapath-loader
level=warning subsys=datapath-loader
level=warning msg="Skipped 10968082 bytes, use 'verb' option for the full verbose log." subsys=datapath-loader
level=warning msg="[...]" subsys=datapath-loader
level=warning msg="7 +0)" subsys=datapath-loader
level=warning msg=" R0=map_value(id=0,off=0,ks=2,vs=8,imm=0) R1_w=inv(id=0,umax_value=65535,var_off=(0x0; 0xffff)) R6=ctx(id=0,off=0,imm=0) R7=map_value(id=0,off=0,ks=12,vs=12,imm=0) R8=inv0 R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256_w=map_value" subsys=datapath-loader
level=warning msg="656: (7b) *(u64 *)(r10 -88) = r1" subsys=datapath-loader
level=warning msg="657: (b7) r1 = 0" subsys=datapath-loader
level=warning msg="658: (7b) *(u64 *)(r10 -56) = r1" subsys=datapath-loader
level=warning msg="659: (7b) *(u64 *)(r10 -64) = r1" subsys=datapath-loader
level=warning msg="660: (7b) *(u64 *)(r10 -48) = r1" subsys=datapath-loader
level=warning msg="661: (7b) *(u64 *)(r10 -72) = r1" subsys=datapath-loader
level=warning msg="662: (7b) *(u64 *)(r10 -80) = r1" subsys=datapath-loader
level=warning msg="663: (7b) *(u64 *)(r10 -96) = r1" subsys=datapath-loader
level=warning msg="664: (6b) *(u16 *)(r10 -56) = r1" subsys=datapath-loader
level=warning msg="665: (b7) r7 = 1" subsys=datapath-loader
level=warning msg="666: (71) r2 = *(u8 *)(r10 -100)" subsys=datapath-loader
level=warning msg="667: (15) if r2 == 0x6 goto pc+1" subsys=datapath-loader
level=warning msg=" R0=map_value(id=0,off=0,ks=2,vs=8,imm=0) R1=inv0 R2=inv(id=0,umax_value=255,var_off=(0x0; 0xff)) R6=ctx(id=0,off=0,imm=0) R7=inv1 R8=inv0 R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-48=0 fp-56=0 fp-64=0 fp-72=0 fp-80=0 fp-96=0 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256=map_value" subsys=datapath-loader
level=warning msg="668: (b7) r7 = 0" subsys=datapath-loader
level=warning msg="669: (79) r1 = *(u64 *)(r10 -208)" subsys=datapath-loader
level=warning msg="670: (bf) r3 = r1" subsys=datapath-loader
level=warning msg="671: (67) r3 <<= 4" subsys=datapath-loader
level=warning msg="672: (57) r3 &= 32" subsys=datapath-loader
level=warning msg="673: (67) r1 <<= 3" subsys=datapath-loader
level=warning msg="674: (57) r1 &= 8" subsys=datapath-loader
level=warning msg="675: (4f) r1 |= r3" subsys=datapath-loader
level=warning msg="676: (b7) r8 = 60" subsys=datapath-loader
level=warning msg="677: (79) r3 = *(u64 *)(r10 -200)" subsys=datapath-loader
level=warning msg="678: (6b) *(u16 *)(r10 -58) = r3" subsys=datapath-loader
level=warning msg="679: (6b) *(u16 *)(r10 -60) = r1" subsys=datapath-loader
level=warning msg="680: (67) r7 <<= 1" subsys=datapath-loader
level=warning msg="681: (55) if r2 != 0x6 goto pc+9" subsys=datapath-loader
level=warning msg=" R0=map_value(id=0,off=0,ks=2,vs=8,imm=0) R1=inv0 R2=inv6 R3=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=inv0 R8=inv60 R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-48=0 fp-56=0 fp-72=0 fp-80=0 fp-96=0 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256=map_value" subsys=datapath-loader
level=warning msg="682: (bf) r2 = r7" subsys=datapath-loader
level=warning msg="683: (67) r2 <<= 3" subsys=datapath-loader
level=warning msg="684: (a7) r2 ^= 16" subsys=datapath-loader
level=warning msg="685: (57) r2 &= 248" subsys=datapath-loader
level=warning msg="686: (b7) r8 = 60" subsys=datapath-loader
level=warning msg="687: (15) if r2 == 0x0 goto pc+1" subsys=datapath-loader
level=warning msg=" R0=map_value(id=0,off=0,ks=2,vs=8,imm=0) R1=inv0 R2=inv(id=0,umax_value=248,var_off=(0x0; 0xf8)) R3=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=inv0 R8=inv60 R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-48=0 fp-56=0 fp-72=0 fp-80=0 fp-96=0 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256=map_value" subsys=datapath-loader
level=warning msg="688: (b7) r8 = 21600" subsys=datapath-loader
level=warning msg="689: (4f) r1 |= r2" subsys=datapath-loader
level=warning msg="690: (6b) *(u16 *)(r10 -60) = r1" subsys=datapath-loader
level=warning msg="691: (85) call bpf_ktime_get_ns#5" subsys=datapath-loader
level=warning msg="692: (37) r0 /= 1000000000" subsys=datapath-loader
level=warning msg="693: (0f) r8 += r0" subsys=datapath-loader
level=warning msg="694: (63) *(u32 *)(r10 -64) = r8" subsys=datapath-loader
level=warning msg="695: (71) r3 = *(u8 *)(r10 -54)" subsys=datapath-loader
level=warning msg="696: (61) r2 = *(u32 *)(r10 -48)" subsys=datapath-loader
level=warning msg="697: (bf) r1 = r3" subsys=datapath-loader
level=warning msg="698: (4f) r1 |= r7" subsys=datapath-loader
level=warning msg="699: (bf) r4 = r1" subsys=datapath-loader
level=warning msg="700: (57) r4 &= 255" subsys=datapath-loader
level=warning msg="701: (5d) if r3 != r4 goto pc+7" subsys=datapath-loader
level=warning msg=" R0=inv(id=0) R1=inv0 R2=inv0 R3=inv0 R4=inv0 R6=ctx(id=0,off=0,imm=0) R7=inv0 R8=inv(id=0) R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-48=0 fp-56=0 fp-72=0 fp-80=0 fp-96=0 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256=map_value" subsys=datapath-loader
level=warning msg="702: (07) r2 += 5" subsys=datapath-loader
level=warning msg="703: (bf) r3 = r0" subsys=datapath-loader
level=warning msg="704: (67) r3 <<= 32" subsys=datapath-loader
level=warning msg="705: (77) r3 >>= 32" subsys=datapath-loader
level=warning msg="706: (67) r2 <<= 32" subsys=datapath-loader
level=warning msg="707: (77) r2 >>= 32" subsys=datapath-loader
level=warning msg="708: (3d) if r2 >= r3 goto pc+2" subsys=datapath-loader
level=warning msg=" R0=inv(id=0) R1=inv0 R2=inv5 R3=inv(id=0,umin_value=6,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R4=inv0 R6=ctx(id=0,off=0,imm=0) R7=inv0 R8=inv(id=0) R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-48=0 fp-56=0 fp-72=0 fp-80=0 fp-96=0 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256=map_value" subsys=datapath-loader
level=warning msg="709: (73) *(u8 *)(r10 -54) = r1" subsys=datapath-loader
level=warning msg="710: (63) *(u32 *)(r10 -48) = r0" subsys=datapath-loader
level=warning msg="711: (b7) r7 = 0" subsys=datapath-loader
level=warning msg="712: (63) *(u32 *)(r10 -52) = r7" subsys=datapath-loader
level=warning msg="713: (bf) r2 = r10" subsys=datapath-loader
level=warning msg="714: (07) r2 += -112" subsys=datapath-loader
level=warning msg="715: (bf) r3 = r10" subsys=datapath-loader
level=warning msg="716: (07) r3 += -96" subsys=datapath-loader
level=warning msg="717: (79) r8 = *(u64 *)(r10 -224)" subsys=datapath-loader
level=warning msg="718: (bf) r1 = r8" subsys=datapath-loader
level=warning msg="719: (b7) r4 = 0" subsys=datapath-loader
level=warning msg="720: (85) call bpf_map_update_elem#2" subsys=datapath-loader
level=warning msg="721: (67) r0 <<= 32" subsys=datapath-loader
level=warning msg="722: (c7) r0 s>>= 32" subsys=datapath-loader
level=warning msg="723: (65) if r0 s> 0xffffffff goto pc+755" subsys=datapath-loader
level=warning msg=" R0=inv(id=0,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) R6=ctx(id=0,off=0,imm=0) R7=inv0 R8=map_ptr(id=0,off=0,ks=14,vs=56) R9=map_value(id=0,off=0,ks=12,vs=12,imm=0) R10=fp0,call_-1 fp-144=0 fp-152=0 fp-160=0 fp-208=0 fp-216=0 fp-224=map_ptr fp-248=map_value fp-256=map_value" subsys=datapath-loader
level=warning msg="724: (79) r1 = *(u64 *)(r10 -216)" subsys=datapath-loader
level=warning msg="725: (73) *(u8 *)(r10 -99) = r1" subsys=datapath-loader

Kubernetes version

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T16:54:35Z", GoVersion:"go1.12.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

The document uses helm template; I believe helm 3's template command is backwards compatible with helm 2.

$ helm version
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}

Linux kernel version

$ kubectl get nodes -owide
NAME      STATUS   ROLES               AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                               KERNEL-VERSION                CONTAINER-RUNTIME
master    Ready    controlplane,etcd   39h   v1.15.5   10.16.18.171   <none>        Red Hat Enterprise Linux 8.0 (Ootpa)   4.18.0-80.11.2.el8_0.x86_64   docker://19.3.5
worker1   Ready    worker              39h   v1.15.5   10.16.18.172   <none>        Red Hat Enterprise Linux 8.0 (Ootpa)   4.18.0-80.11.2.el8_0.x86_64   docker://19.3.5
worker2   Ready    worker              39h   v1.15.5   10.16.18.173   <none>        Red Hat Enterprise Linux 8.0 (Ootpa)   4.18.0-80.11.2.el8_0.x86_64   docker://19.3.5
worker3   Ready    worker              39h   v1.15.5   10.16.18.174   <none>        Red Hat Enterprise Linux 8.0 (Ootpa)   4.18.0-80.11.2.el8_0.x86_64   docker://19.3.5

Flows are not being displayed

Hi,

Flows are not being displayed under service maps.


Environment:

Fedora 31
Kubernetes 1.15.2 (on-prem)
Hubble: latest
Cilium: latest

Thanks

container/ring: data race detected on ring.full

I compiled hubble (at ref c7f6953) with the -race flag and ran into a data race while running hubble serve --max-flows 255:

WARNING: DATA RACE
Read at 0x00c0006801b8 by goroutine 80:
  github.com/cilium/hubble/pkg/server.newRingReader()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/container/ring.go:83 +0x3ff
  github.com/cilium/hubble/pkg/server.getFlows()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/server/local_observer.go:185 +0x170
  github.com/cilium/hubble/pkg/server.(*LocalObserverServer).GetFlows()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/server/local_observer.go:162 +0x66
  github.com/cilium/hubble/api/v1/observer._Observer_GetFlows_Handler()
      /home/vagrant/go/src/github.com/cilium/hubble/api/v1/observer/observer.pb.go:714 +0x135
  google.golang.org/grpc.(*Server).processStreamingRPC()
      /home/vagrant/go/src/github.com/cilium/hubble/vendor/google.golang.org/grpc/server.go:1199 +0x1521
  google.golang.org/grpc.(*Server).handleStream()
      /home/vagrant/go/src/github.com/cilium/hubble/vendor/google.golang.org/grpc/server.go:1279 +0x12d7
  google.golang.org/grpc.(*Server).serveStreams.func1.1()
      /home/vagrant/go/src/github.com/cilium/hubble/vendor/google.golang.org/grpc/server.go:710 +0xc8

Previous write at 0x00c0006801b8 by goroutine 41:
  github.com/cilium/hubble/pkg/container.(*Ring).Write()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/container/ring.go:99 +0x145
  github.com/cilium/hubble/pkg/server.(*LocalObserverServer).Start()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/server/local_observer.go:109 +0x567

Goroutine 80 (running) created at:
  google.golang.org/grpc.(*Server).serveStreams.func1()
      /home/vagrant/go/src/github.com/cilium/hubble/vendor/google.golang.org/grpc/server.go:708 +0xb8
  google.golang.org/grpc/internal/transport.(*http2Server).operateHeaders()
      /home/vagrant/go/src/github.com/cilium/hubble/vendor/google.golang.org/grpc/internal/transport/http2_server.go:432 +0x1679
  google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams()
      /home/vagrant/go/src/github.com/cilium/hubble/vendor/google.golang.org/grpc/internal/transport/http2_server.go:473 +0x3d7
  google.golang.org/grpc.(*Server).serveStreams()
      /home/vagrant/go/src/github.com/cilium/hubble/vendor/google.golang.org/grpc/server.go:706 +0x19a
  google.golang.org/grpc.(*Server).handleRawConn.func1()
      /home/vagrant/go/src/github.com/cilium/hubble/vendor/google.golang.org/grpc/server.go:668 +0x4c

Goroutine 41 (running) created at:
  github.com/cilium/hubble/pkg/server.(*ObserverServer).Start()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/server/observer.go:77 +0x92
  github.com/cilium/hubble/cmd/serve.New.func1()
      /home/vagrant/go/src/github.com/cilium/hubble/cmd/serve/serve.go:103 +0x9a0
  github.com/spf13/cobra.(*Command).execute()
      /home/vagrant/go/src/github.com/cilium/hubble/vendor/github.com/spf13/cobra/command.go:830 +0x8e0
  github.com/spf13/cobra.(*Command).ExecuteC()
      /home/vagrant/go/src/github.com/cilium/hubble/vendor/github.com/spf13/cobra/command.go:914 +0x41a
  github.com/cilium/hubble/cmd.Execute()
      /home/vagrant/go/src/github.com/cilium/hubble/vendor/github.com/spf13/cobra/command.go:864 +0x8b
  main.main()
      /home/vagrant/go/src/github.com/cilium/hubble/main.go:22 +0x2f

I didn't take the time to investigate yet, but opened this issue to keep track of it, as at first glance it seems to be a different one than the one in #82.
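
For illustration only, one common way to address this kind of race (not necessarily the fix that was eventually applied) is to publish the flag with atomic operations; the field and method names below are hypothetical:

package container

import "sync/atomic"

// Ring is a simplified, hypothetical version of the ring buffer.
type Ring struct {
	full atomic.Bool // written by the writer goroutine, read by readers
	// ... buffer fields elided ...
}

// Write appends an entry and publishes the "full" state safely.
func (r *Ring) Write(entry interface{}) {
	// ... write entry into the buffer (elided) ...
	// In the real code this would happen only once the buffer wraps;
	// stored unconditionally here for brevity.
	r.full.Store(true)
}

// isFull can be called from reader goroutines without racing on full.
func (r *Ring) isFull() bool {
	return r.full.Load()
}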

Starting in Rancher, Can't start Cilium-Agent

I used the instructions listed in Installation.md to try to start Hubble (but got stuck on Cilium), and the Cilium deployment gives the following error in the container logs:

level=error msg="Error while initializing daemon" error="exit status 2" subsys=daemon
level=fatal msg="Error while creating daemon" error="exit status 2" subsys=daemon

There is a warning message in the logs just before that:

level=error msg="Command execution failed" cmd="[/var/lib/cilium/bpf/init.sh /var/lib/cilium/bpf /var/run/cilium/state 10.42.0.113 <nil> vxlan    1500 false false  false false /var/run/cilium/cgroupv2 /run/cilium/bpffs ]" error="exit status 2" subsys=datapath-loader
level=warning msg="+ set -o pipefail" subsys=datapath-loader
level=warning msg="++ command -v cilium" subsys=datapath-loader
level=warning msg="+ [[ ! -n /usr/bin/cilium ]]" subsys=datapath-loader
level=warning msg="+ rm /var/run/cilium/state/encap.state" subsys=datapath-loader
level=warning msg="+ true" subsys=datapath-loader
... [snipped for brevity]

This is while running a Rancher 2.3.1 + Kubernetes 1.15.6 cluster with a single master. I ran all the commands (with kubectl) outside of Rancher as if it were a normal k8s cluster.

The Rancher UI shows this message:
CrashLoopBackOff: Back-off 5m0s restarting failed container=cilium-agent pod=cilium-hjw65_kube-system(e721fdfd-966f-49a7-b996-c3f3e84f275c)

Note: I'm somewhat new to Kubernetes in general, so if I'm missing something I apologize in advance.

Install Hubble from installation guide failing

Hi,
when trying to follow the instructions on this page:
https://github.com/cilium/hubble/blob/master/Documentation/installation.md
once you reach Hubble and try to run this command:

helm template hubble \
    --namespace kube-system \
    --set metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
    > hubble.yaml

it fails with:

Error: rendering template failed: runtime error: invalid memory address or nil pointer dereference

I also tried to install without any metrics and it still does not work; it looks like the template here is broken.
Can you please update the guidelines if anything else is expected?

Error: 14 UNAVAILABLE: failed to connect to all addresses

After deploying Cilium and Hubble onto a GKE setup, I get the following error in the Hubble UI (for the service map):

Failed to discover cluster: Error: from hubble: Error: 14 UNAVAILABLE: failed to connect to all addresses

The logs for the hubble-ui pod:

{"name":"frontend","hostname":"hubble-ui-548f6dfc9-czss8","pid":18,"req_id":"88ea14bc-662c-439e-9bb1-dc29c3d55575","user":"admin@localhost","level":50,"err":{"message":"from hubble: Error: 14 UNAVAILABLE: failed to connect to all addresses","locations":[{"line":10,"column":7}],"path":["viewer","discoverCluster"],"extensions":{"code":"INTERNAL_SERVER_ERROR"}},"msg":"","time":"2019-11-21T13:53:52.745Z","v":0}

Hubble pods themselves:

{"level":"info","ts":1574338921.8148355,"caller":"cmd/server.go:166","msg":"Started server with args","max-flows":131071,"duration":0}
{"level":"info","ts":1574338921.8151586,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"flow","status":""}
{"level":"info","ts":1574338921.8152533,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"port-distribution","status":""}
{"level":"info","ts":1574338921.815333,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"icmp","status":""}
{"level":"info","ts":1574338921.8154128,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"http","status":""}
{"level":"info","ts":1574338921.8155031,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"dns","status":""}
{"level":"info","ts":1574338921.8155766,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"drop","status":""}
{"level":"info","ts":1574338921.8156652,"caller":"api/registry.go:73","msg":"Configured metrics plugin","name":"tcp","status":""}
{"level":"info","ts":1574338921.8190467,"caller":"cmd/server.go:286","msg":"Starting gRPC server on client-listener","client-listener":"unix:///var/run/hubble.sock"}
{"level":"warn","ts":1574338921.8237007,"caller":"server/ipcache.go:115","msg":"Failed to obtain IPCache from Cilium. If you are using Cilium 1.6 or older, this is expected. Pod names of endpoints running on remote nodes will not be resolved."}

Note that metrics in Grafana are working perfectly.

Also note that I am using kubectl port-forward to connect to the UI (this is not a minikube setup), so I do not have access to the internal services; is that a requirement?

No version numbers in Cilium Hubble container builds

We're looking at integrating Cilium Hubble into our Kubernetes cluster deployments, but we're prevented from using it as a troubleshooting tool because Cilium Hubble doesn't yet have any versioned containers to pull other than the latest tag.

As a general policy, we don't allow using the latest tag in our manifests to prevent unexpected issues from cropping up. For example, pinning to a container version:

  • Prevents running different commits of a service within the same cluster
  • Prevents us from running different commits of a service across clusters (10+ and growing!)

Is Hubble close to having a release tag chosen, even if it's nowhere near ready for a full release? (v0.0.0001? 🤣)

observe --follow terminates when buffer is empty

This might be related to #131, but I'm seeing a slightly different issue where observe --follow terminates when the ring buffer is empty. I looked into it a bit; basically, what happens is:

Policy-free L7 visibility

Summary

A simple way of enabling FQDN visibility via a Cilium flag instead of requiring complicated annotations. DNS is standardized in Kubernetes, so it is simple to automatically detect all DNS traffic and provide visibility.
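As a rough illustration of the flag-instead-of-annotations idea, here is a hypothetical agent flag wired up with cobra (which the project already vendors); the flag name and behavior are invented for illustration and are not Cilium's actual implementation:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	var dnsVisibility bool

	cmd := &cobra.Command{
		Use: "agent",
		Run: func(cmd *cobra.Command, args []string) {
			if dnsVisibility {
				// Hypothetical: route all port-53 traffic through the DNS
				// proxy so FQDN flows become visible without per-pod
				// annotations.
				fmt.Println("DNS visibility enabled for all endpoints")
			}
		},
	}
	// Invented flag name, for illustration only.
	cmd.Flags().BoolVar(&dnsVisibility, "enable-dns-visibility", false,
		"Automatically enable L7 DNS visibility for all endpoints")

	if err := cmd.Execute(); err != nil {
		fmt.Println(err)
	}
}
```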

apply filter before applying `--last`

The current logic takes the last --last elements from the ring buffer and only then applies the filter, which usually yields fewer flows than requested, or even none.
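A minimal sketch of the desired behavior, with a stand-in flow type and filter (illustrative names, not the actual Hubble API): walk the ring backwards and count only *matching* flows, so --last applies after filtering.

```go
package main

import "fmt"

// Flow and matches are stand-ins for Hubble's flow type and filter logic.
type Flow struct{ Verdict string }

func matches(f Flow) bool { return f.Verdict == "DROPPED" }

// lastN returns up to n of the most recent flows that pass the filter.
// Filtering happens while walking backwards, so the count applies to
// matching flows rather than to raw ring-buffer entries.
func lastN(ring []Flow, n int) []Flow {
	out := make([]Flow, 0, n)
	for i := len(ring) - 1; i >= 0 && len(out) < n; i-- {
		if matches(ring[i]) {
			out = append(out, ring[i])
		}
	}
	// Reverse to restore chronological order.
	for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
		out[i], out[j] = out[j], out[i]
	}
	return out
}

func main() {
	ring := []Flow{{"FORWARDED"}, {"DROPPED"}, {"FORWARDED"}, {"DROPPED"}}
	fmt.Println(lastN(ring, 2)) // [{DROPPED} {DROPPED}]
}
```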

container (ring): data race while running tests

Hit a data race while running the tests with the -race flag:

$ go test -timeout=30s -cover -mod=vendor -race $(go list ./...)
(...)
==================
WARNING: DATA RACE
Read at 0x00c000350808 by goroutine 92:
  github.com/cilium/hubble/pkg/container.(*Ring).readFrom.func1()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/container/ring.go:180 +0x1c0

Previous write at 0x00c000350808 by goroutine 99:
  github.com/cilium/hubble/pkg/container.(*Ring).Write()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/container/ring.go:102 +0xd8
  github.com/cilium/hubble/pkg/server.(*LocalObserverServer).Start()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/server/local_observer.go:109 +0x6ad

Goroutine 92 (running) created at:
  github.com/cilium/hubble/pkg/container.(*Ring).readFrom()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/container/ring.go:163 +0xbc
  github.com/cilium/hubble/pkg/container.(*RingReader).Next()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/container/ring_reader.go:57 +0x281
  github.com/cilium/hubble/pkg/server.(*flowsReader).Next()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/server/local_observer.go:306 +0x16e
  github.com/cilium/hubble/pkg/server.getFlows()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/server/local_observer.go:199 +0x3ee
  github.com/cilium/hubble/pkg/server.(*LocalObserverServer).GetFlows()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/server/local_observer.go:162 +0x7e
  github.com/cilium/hubble/pkg/server.TestObserverServer_GetLastNFlows_With_Follow.func3()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/server/observer_test.go:249 +0x8f

Goroutine 99 (running) created at:
  github.com/cilium/hubble/pkg/server.(*ObserverServer).Start()
      /home/vagrant/go/src/github.com/cilium/hubble/pkg/server/observer.go:122 +0xf8
==================
--- FAIL: TestObserverServer_GetLastNFlows_With_Follow (0.01s)
    testing.go:853: race detected during execution of test
FAIL
coverage: 60.9% of statements
FAIL	github.com/cilium/hubble/pkg/server	0.391s
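The race is between Ring.Write and the reader goroutine created by Ring.readFrom, which touch the same cursor without synchronization. A minimal sketch of one common fix, using a deliberately simplified ring type (illustrative only, not the actual pkg/container implementation): guard the shared write cursor with a mutex (atomics would also work) on both the write and read paths.

```go
package main

import (
	"fmt"
	"sync"
)

// ring is a simplified stand-in for pkg/container.Ring.
type ring struct {
	mu    sync.Mutex
	data  []string
	write int // next write position, shared between writer and readers
}

func newRing(n int) *ring { return &ring{data: make([]string, n)} }

// Write advances the cursor under the lock, so concurrent readers
// never observe a torn update.
func (r *ring) Write(v string) {
	r.mu.Lock()
	r.data[r.write%len(r.data)] = v
	r.write++
	r.mu.Unlock()
}

// Read snapshots the cursor under the same lock before reading.
func (r *ring) Read(i int) (string, bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if i >= r.write || i < r.write-len(r.data) {
		return "", false
	}
	return r.data[i%len(r.data)], true
}

func main() {
	r := newRing(4)
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { // writer
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			r.Write(fmt.Sprint(i))
		}
	}()
	go func() { // concurrent reader: clean under the -race detector
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			r.Read(i)
		}
	}()
	wg.Wait()
	fmt.Println("done, no data race")
}
```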

Numerical identity for endpoints not always populated

Some Cilium trace notifications (e.g. from-endpoint flows) do not contain the numerical security identity of the source/target (in the case of from-endpoint, this is more or less by design, since it is intended to be an early tracepoint that happens before any lookups/modifications).

For sources/targets that are not a local endpoint, this is not a problem, as the identities are populated when we perform the IPCache lookup. For local endpoints, however, we fail to perform a lookup to obtain this information, since our EndpointGetter does not have a field for the security identity.

We should extend our new EndpointInfo interface to expose the numerical security identity of the endpoint, so we can populate the field in the parser.
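A minimal sketch of what that extension could look like, with hypothetical interface and method names (the actual EndpointInfo interface in Hubble may differ):

```go
package main

import "fmt"

// EndpointInfo sketches the existing interface; the added
// SecurityIdentity method is the proposed extension.
type EndpointInfo interface {
	GetID() uint64
	GetPodName() (namespace, pod string)
	// SecurityIdentity returns the endpoint's numerical security
	// identity, so the parser can populate it for local endpoints.
	SecurityIdentity() uint32
}

// localEndpoint is a hypothetical implementation backed by the
// local endpoint cache.
type localEndpoint struct {
	id       uint64
	ns, pod  string
	identity uint32
}

func (e *localEndpoint) GetID() uint64                { return e.id }
func (e *localEndpoint) GetPodName() (string, string) { return e.ns, e.pod }
func (e *localEndpoint) SecurityIdentity() uint32     { return e.identity }

func main() {
	var ep EndpointInfo = &localEndpoint{id: 42, ns: "default", pod: "web-0", identity: 10023}
	fmt.Println(ep.SecurityIdentity()) // the parser can now fill in this field
}
```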

Flows don't show up on GKE

Flows and arrows are not visible in the Hubble UI, yet flows for the "hubble" namespace are visible. Running on GKE.

Installation procedure:

helm template cilium \
  --namespace cilium \
  --set global.nodeinit.enabled=true \
  --set nodeinit.reconfigureKubelet=true \
  --set nodeinit.removeCbrBridge=true \
  --set global.cni.binPath=/home/kubernetes/bin \
  --set global.tag=v1.7.0-rc1 \
  > cilium.yaml

helm template hubble \
    --namespace hubble \
    --set metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}" \
    --set ui.enabled=true \
    > hubble.yaml

I can confirm that flows are visible in "cilium monitor", "hubble observe", and "kubectl get cep".
