
tracer's Introduction

Tracer

The kernel tracer that attaches eBPF probes to containers for capturing TLS traffic.

See Makefile for building and running the program.

tracer's People

Contributors

alongir, bserdar, corest, iluxa, mertyildiran


tracer's Issues

Improve CPU, Memory consumption in Tracer

At this point, Tracer consumes too much memory and CPU in a busy cluster.
We need to find a way to throttle the resource consumption.
One option is to use K8s resource throttling.

  • @iluxa to test and propose the right limits
  • @alongir to provide a few examples of high CPU and mem consumption by the Tracer [Please do not wait for this]
  • @alongir to review this PR

Example from a very small cluster:

Image
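As a starting point for the K8s resource throttling option, limits could be set on the tracer container spec. The values below are placeholders only, pending the measurements and limits @iluxa is asked to propose above:

```yaml
# Hypothetical starting point; tune based on observed consumption.
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 750m
    memory: 1Gi
```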

TLS payload written to tls.pcap only appears after a significant delay

After running many tests, I have come to the following conclusions:

  1. Tracer behavior isn't predictable.
  2. It does show (sometimes) payload related to OpenSSL and crypto/tls.
  3. Often, payload appears in tls.pcap only after a very long time.
  4. Often, payload is completely ignored for a long time and then appears.
  5. I can't be sure all intercepted traffic appears there.

You can use this command for testing:

k apply -f https://raw.githubusercontent.com/kubeshark/sock-shop-demo/master/deploy/kubernetes/tls-demo.yaml

Hint: you can check the container logs to see the TLS payloads:

Image
Image

Add TLS Metrics to Worker metrics

Following up on the introduction of the new Worker metrics feature, let's add TLS metrics, including the amount of traffic intercepted from the various sources (e.g. libSSL). I'd create a metric per source where possible.

@iluxa - since @corest wrote the Worker metrics feature, he can help implement this one.
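One way to keep a per-source counter, as suggested above, is a small registry keyed by capture source. This is a minimal stdlib sketch only; the source names ("libssl", "gotls") are illustrative, and the real metrics should follow whatever conventions the Worker metrics feature already uses:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// tlsSourceMetrics counts intercepted TLS bytes per capture source
// (e.g. "libssl", "gotls"). Source names here are illustrative.
type tlsSourceMetrics struct {
	mu       sync.Mutex
	counters map[string]*atomic.Uint64
}

func newTLSSourceMetrics() *tlsSourceMetrics {
	return &tlsSourceMetrics{counters: map[string]*atomic.Uint64{}}
}

// Add records `bytes` of intercepted traffic for `source`,
// creating the counter on first use.
func (m *tlsSourceMetrics) Add(source string, bytes uint64) {
	m.mu.Lock()
	c, ok := m.counters[source]
	if !ok {
		c = new(atomic.Uint64)
		m.counters[source] = c
	}
	m.mu.Unlock()
	c.Add(bytes)
}

// Get returns the running total for `source` (zero if never seen).
func (m *tlsSourceMetrics) Get(source string) uint64 {
	m.mu.Lock()
	defer m.mu.Unlock()
	if c, ok := m.counters[source]; ok {
		return c.Load()
	}
	return 0
}

func main() {
	m := newTLSSourceMetrics()
	m.Add("libssl", 1500)
	m.Add("gotls", 900)
	m.Add("libssl", 500)
	fmt.Println(m.Get("libssl"), m.Get("gotls")) // 2000 900
}
```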

Ensure that the Tracer functions well post recent changes related to pod targeting

Recent pod-targeting-related changes may have impacted the Tracer's functionality. Please go over it and ensure:

  1. In case of AF-PACKET use, the tracer adheres to the pod targeting rules (including the BPF override).
  2. In case of eBPF, functionality is supported.

If you recall there were changes related to:

  • BPF expression
  • BPF override
  • IP extraction in the case of multiple interfaces (IFCs).

Cgroups V2 support

The following was reported as a bug.
We need to ensure we support cgroups v2 on premises with RHEL 9.

Image
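The cgroup versions differ in how /proc/&lt;pid&gt;/cgroup is laid out: a pure cgroups-v2 host (the RHEL 9 default) exposes a single `0::<path>` line, while v1 lists one line per controller hierarchy. A hedged sketch of that distinction, operating on file contents passed in as a string:

```go
package main

import (
	"fmt"
	"strings"
)

// cgroupVersion guesses the cgroup version from the contents of a
// /proc/<pid>/cgroup file. On a pure cgroups-v2 host every line has the
// form "0::<path>"; v1 hosts list one "<id>:<controllers>:<path>" line
// per hierarchy. This is a sketch of the format difference only, not
// the tracer's actual detection logic.
func cgroupVersion(procCgroup string) int {
	for _, line := range strings.Split(strings.TrimSpace(procCgroup), "\n") {
		if !strings.HasPrefix(line, "0::") {
			return 1
		}
	}
	return 2
}

func main() {
	v2 := "0::/system.slice/containerd.service"
	v1 := "12:cpu,cpuacct:/kubepods/burstable/pod123\n1:name=systemd:/kubepods"
	fmt.Println(cgroupVersion(v2), cgroupVersion(v1)) // 2 1
}
```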

Replace tls.pcap with a unix socket

Context:
As the tls.pcap mechanism is fragile and therefore unstable, we decided to replace it with a Unix socket mechanism.

Requested:

  1. Replace mechanism
  2. [Future] Write tests to ensure proper operation (no message is lost)
  3. [Future] Add to the CI/CD process.

Optimize pod scanning on K8s events

Current:

  • At this point the Tracer scans all pods on every K8s event.
  • Scanning is very costly

Requested:

  • Optimize this process and, ideally, run it only when required

Update libbpf to the latest

Tracer uses a fairly old libbpf library: v0.3.0, which is more than three years old; the latest is v1.3.0.

This update doesn't bring immediate new value to the tracer, but newer libbpf features can be leveraged for bug fixes, improvements, and new development.

Support SSH protocol

Context:

  • In K8s, a common use case includes SSH-ing into a node or a pod.
  • Using the SSH protocol, the OS can be instructed to run commands

Requested:

  • Research the possibility of intercepting the SSH protocol
  • Implement the SSH protocol support

TBD

  • Decide if this is a Tracer or a Worker feature (e.g. does this require eBPF?)

Write tests ensuring functionality and compatibility

Current:

  • We have partial support for TLS libraries and programs
  • We don't know whether we can support a customer environment unless we run in it and see

Requested:

  • Build and maintain a set of tests ensuring compatibility that correlates to the documentation
  • Tests we can run every time there's a new release to ensure compatibility

eBPF TLS Detection & Report

Use the Tracer program, running in Host mode to inspect the running processes and detect TLS libraries and programs.
Print out a report.

Detect:

  1. Envoy with/without mTLS
  2. Linkerd versions
  3. OpenSSL versions
  4. BoringSSL versions
  5. Golang programs

Intercept traffic in host mode and save to PCAP

For the purpose of verifying proper TLS interception, Tracer should be able to run in Host mode and intercept traffic. Intercepted traffic should be stored in a PCAP file.
We need to keep in mind the purpose of this feature:

  1. Troubleshoot TLS interception problems in a K8s environment where Kubeshark is running.
  2. Setting expectations of what can and cannot be intercepted in a K8s env running Kubeshark.
    Anything else is a nice-to-have and not mandatory.

golang k6.io/k6/lib/netext.Conn

Alon, I found the root cause of why k6 is not intercepted by the tracer.
It is not a stripped Golang application, so it is targeted correctly in the logs;
however, it uses golang k6.io/k6/lib/netext.Conn as the underlying object, while the tracer expects only net.TCPConn.
I'm looking into ways to fix the problem; let me know if a GitHub issue needs to be created for this.

Slack Message

Is this a true warning?

Should it be there?

2024-02-12T23:47:55Z WRN tracer/tracer.go:126 > PID skipped no libssl.so found: error="libssl.so not found for PID 3587" pid=3587

Image

TLS packet drop

Can we take a look at the TLS packet drop and see if in fact we drop packets? If we do, let's solve it. If we don't, can we true up the counters?

Image

Linkerd MTLS support

Current:
When Linkerd is installed we have no visibility to mTLS traffic in the same way we do with Envoy.

Requested:

  • Run Kubeshark with Linkerd
  • See if we can intercept TLS traffic
  • Come up with a solution
  • Implement the solution

Suspected Worker<>Tracer Broken Integration

When running a simple experiment:

  1. Deploy the following containers, using these manifests:

  • deploy/kubernetes/manifests/00-sock-shop-ns.yaml
  • deploy/kubernetes/manifests/extras/46-mizutest-outbound-tls-openssl-dep.yaml
  • deploy/kubernetes/manifests/extras/47-mizutest-outbound-tls-openssl-svc.yaml
  • deploy/kubernetes/manifests/extras/52-mizutest-outbound-tls-golang-dep.yaml
  • deploy/kubernetes/manifests/extras/53-mizutest-outbound-tls-golang-svc.yaml

  2. Ensure the tracer generates the right information from both images.
  3. Ensure the Worker receives and processes the information from the Tracer.

Improve Tracer performance, reduce CPU consumption at the scanning stage

Current:

  • When Tracer starts running, it consumes a significant amount of CPU, probably related to the number of processes running on the node, as it inspects all processes to find TLS elements (e.g. libraries and Golang programs).
  • The high consumption continues until all processes are inspected.
  • This elevated consumption can cause an OOMKill for the Worker pod.
  • After an OOMKill, this process begins again.
  • While this happens on busy nodes and can be resolved with more resources, we can likely improve this behavior.

Requested:

  • Throttle the scanning process
  • Enable/disable detection of TLS elements (e.g. OpenSSL, Golang programs, etc.) - this is optional and great for debugging. If you solve the throttling, this feature gains a secondary priority.

no BTF found for kernel version

How do I handle this error?
Couldn't initialize the tracer: error="field GoCryptoTlsAbi0Read: program go_crypto_tls_abi0_read: apply CO-RE relocations: load kernel spec: no BTF found for kernel version 5.4.0-89-generic: not supported"

(Too) many of the TLS packets are declared as errors

Many TLS messages appear in the dashboard as errors.

FYI: We have issues with the Tracer that we're in the process of resolving

@iluxa this can be a problem in the tracer when writing to the tls.pcap file.

Image

Image

This is a normal (no error) TLS payload:

Image

Improve CPU, memory and overall resource consumption in Tracer

At this point, Tracer consumes too much memory and CPU in a busy cluster.
We need to find a way to throttle the resource consumption.
One option is to use K8s resource throttling.

  • @iluxa to test and propose the right limits
  • @alongir to provide a few examples of high CPU and mem consumption by the Tracer [Please do not wait for this]
  • @corest to implement this PR

Error: Unable to get go user-kernel context

2024-02-12T17:40:00Z ERR tracer/bpf_logger.go:107 > [bpf] [18326625836259] Unable to get go user-kernel context [fd: 18]

Is this a real error? If yes, can we solve it? If not can we reduce its severity?

-cbuf doesn't show payload

While I see traffic on the dashboard, this is what I see when I use cbuf:

/app # cat data/ip-192-168-19-177.us-west-1.compute.internal/tls_last.pcap 
?ò?/app # 

github.com/kubeshark/api v1.1.15 is not found

The go.mod file of the tracer package defines a dependency on "github.com/kubeshark/api v1.1.15", but this API library is not publicly available, so the compilation will fail.

tls_process_discoverer.go:7:2: reading github.com/kubeshark/api/go.mod at revision v1.1.15: git ls-remote -q origin in /home/dc/go/pkg/mod/cache/vcs/5da0bad9395502de92b7a446443350e8d892c25d92cfcedf0f471a61010b01bd: exit status 128:
fatal: could not read Username for 'https://github.com': terminal prompts disabled
Confirm the import path was entered correctly.
If this is a private repository, see https://golang.org/doc/faq#git_https for additional information.
tracer46_bpfel_x86.go:371:12: pattern tracer46_bpfel_x86.o: no matching files found
make: *** [Makefile:27: build] Error 1

Tracer incompatible with `kind`

Regarding this issue: https://github.com/kubeshark/kubeshark/issues/1493

  1. pf_ring is not supported yet for the new kernels used in kind (by the way, the whole issue is about kind, not minikube)
  2. when I try to run kubeshark on a Mac M1 kind cluster (which is actually running in a Linux QEMU VM), the tracer fails with:
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 303, [0::/../../../../../system.slice/containerd.service ]" pid=303
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 310, [0::/../../kubelet-kubepods-burstable-pod8cbf9887da25b380bf1858a4d3b399d8.slice/cri-containerd-195a91a1b8102bc1b937235cabf043c8fb6b2b2bd9b77d8dd51c8cc89831a8ea.scope ]" pid=310
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 311, [0::/../../../../../system.slice/containerd.service ]" pid=311
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 326, [0::/../../kubelet-kubepods-burstable-pod584bd4e38ab0df294af27207ce1a29b1.slice/cri-containerd-123b744f4d48f7464a6cb227b16c11849d0f01e4c744adeeed751abc0389df10.scope ]" pid=326
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 367, [0::/../../kubelet-kubepods-burstable-pode44a910c33eb66c4d3b4b617ed29c23e.slice/cri-containerd-a87815ecec535258596da80d6a49016fd00f498f43df2659b4870591a995530e.scope ]" pid=367
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 379, [0::/../../kubelet-kubepods-burstable-pode98a496a5002e4d3842f58e2bb4420dd.slice/cri-containerd-d45d800cc899cf374a9522a225a7190386660c24810b1d9f97575757faa6dfcc.scope ]" pid=379
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 497, [0::/../../kubelet-kubepods-burstable-pod8cbf9887da25b380bf1858a4d3b399d8.slice/cri-containerd-ada2550592573f8a9296d1fb5605b3c35232713bc00319695a6a0d8aa2c08c29.scope ]" pid=497
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 507, [0::/../../kubelet-kubepods-burstable-pode98a496a5002e4d3842f58e2bb4420dd.slice/cri-containerd-9f2ef536b7a4a924eea35993e54aa7f6566c67d788269dae40f1c6485992383d.scope ]" pid=507
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 515, [0::/../../kubelet-kubepods-burstable-pode44a910c33eb66c4d3b4b617ed29c23e.slice/cri-containerd-0b36aec7be58dc122954c9ce73adbe86f40e4ae26b2821fee79c4f0b260782c4.scope ]" pid=515
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 603, [0::/../../kubelet-kubepods-burstable-pod584bd4e38ab0df294af27207ce1a29b1.slice/cri-containerd-07448cee64114b817548f7ba79d70be3010a2c954fde456ddbfa3bed8d5bf705.scope ]" pid=603
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 655, [0::/../../../../kubelet.service ]" pid=655
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 723, [0::/../../../../../system.slice/containerd.service ]" pid=723
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 751, [0::/../../../../../system.slice/containerd.service ]" pid=751
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 767, [0::/../../../kubelet-kubepods-pod0b029d7c_9870_48ef_a6d5_1e6959bfb310.slice/cri-containerd-2990b76b9e90ee6c9e7fc01d641fdcd860d8fce18ead116d0c00e41494d73b17.scope ]" pid=767
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 777, [0::/../../../kubelet-kubepods-besteffort.slice/kubelet-kubepods-besteffort-podbb419ebd_484c_4547_9e99_e87fb73d299c.slice/cri-containerd-10120d72758aecb70439b9cefc1d56576dbd69992cc2f526882b27292ffe1719.scope ]" pid=777
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 822, [0::/../../../kubelet-kubepods-besteffort.slice/kubelet-kubepods-besteffort-podbb419ebd_484c_4547_9e99_e87fb73d299c.slice/cri-containerd-d8c07db59ccd2f3a6fa45c9b3cc6ad16f4f8e817c0a9a026e739a2ec04f227b6.scope ]" pid=822
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 86, [0::/../../../../../system.slice/systemd-journald.service ]" pid=86
2024-02-15T14:35:34Z WRN tracer/tls_process_discoverer.go:69 > Couldn't get the cgroup of process. error="Cgroup path not found for 992, [0::/../../../kubelet-kubepods-pod0b029d7c_9870_48ef_a6d5_1e6959bfb310.slice/cri-containerd-66bcbf0f81d2a239216c661540fd24403a9b987ee7071da1fca2f30c9b85e17c.scope ]" pid=992
2024-02-15T14:35:34Z INF tracer/tls_process_discoverer.go:30 > pids=[]
2024-02-15T14:35:34Z FTL tracer/main.go:68 > Couldn't initialize the tracer: error="field GoCryptoTlsAbi0Read: program go_crypto_tls_abi0_read: apply CO-RE relocations: can't read types: type id 4132: unknown kind: Unknown (19)"

[Slack Message](https://kubeshark.slack.com/archives/D065L40JHPE/p1708007882235239)

Is this an error?

If this is an error, should we fix it? If it's not, can we suppress it?

2024-02-12T23:47:55Z ERR tracer/tracer.go:259 > stack="*fmt.wrapError prog cannot be nil: invalid input\n/app/tracer/ssllib_hooks.go:39 (0x1481c66)\n/app/tracer/ssllib_hooks.go:30 (0x1481b8c)\n/app/tracer/tracer.go:209 (0x1487952)\n/app/tracer/tracer.go:132 (0x1486e6e)\n/app/tracer/tls_process_discoverer.go:36 (0x1484a35)\n/app/tracer/pkg/kubernetes/target.go:130 (0x13ed8e8)\n/app/tracer/pkg/kubernetes/watcher.go:62 (0x13ee037)\n/app/tracer/pkg/kubernetes/watcher.go:77 (0x13ee20b)\n/usr/local/go/src/runtime/asm_amd64.s:1598 (0x46fc81)\n"

Image

Failed marshalling when handling captured traffic

When I tried v52.1.7-dev2, I sometimes see crypto/tls traffic but not OpenSSL traffic. And I see the following logs in the Worker, which makes me think there's a problem with parsing. These logs appear only with TLS traffic.
Also, considering #30, this was challenging to debug, but it might be important.
I did validate that the socket worked on the tracer and worker.

2024-01-25T04:37:17Z ERR main.go:173 > Failed marshalling item: error="json: error calling MarshalJSON for type http.HTTPPayload: Failed converting response to HAR"
2024-01-25T04:37:17Z ERR main.go:173 > Failed marshalling item: error="json: error calling MarshalJSON for type http.HTTPPayload: Failed converting response to HAR"
2024-01-25T04:37:21Z ERR main.go:173 > Failed marshalling item: error="json: error calling MarshalJSON for type http.HTTPPayload: Failed converting request to HAR"
2024-01-25T04:37:21Z ERR main.go:173 > Failed marshalling item: error="json: error calling MarshalJSON for type http.HTTPPayload: Failed converting response to HAR"
2024-01-25T04:37:27Z INF server/middlewares/logger.go:69 > body_size=17 client_id=192.168.50.165 latency="26.528µs" method=GET path=/pcaps/total-size status_code=200
[... the same "Failed converting request/response to HAR" errors, interleaved with periodic /pcaps/total-size requests, repeat continuously through 04:38:02 ...]

Including an uprobe SSL_get_error

Hello!

Thanks so much for your team's contributions here. There have been some great learnings in this project as I dive deeper into eBPF!

I'm curious how this project has been aggregating response chunks that have been sniffed from the SSL_read/SSL_write uprobes.

I've taken a look at https://www.openssl.org/docs/manmaster/man7/ossl-guide-tls-client-block.html, which says that when SSL_read_ex returns 0, a follow-up call to SSL_get_error verifies whether the response completed successfully.

I noticed that a 0 value returned by SSL_read_ex does not get handled and is returned early: https://github.com/kubeshark/tracer/blob/master/bpf/openssl_uprobes.c#L104

When aggregating responses to serve in the UI, I'm curious how you are able to tell when a response has completed.
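For what it's worth, if the SSL_get_error value were captured alongside zero-byte reads, a userspace aggregator could use it to delimit streams. The sketch below is one possible scheme under that assumption, not necessarily how the tracer delimits responses today (its chunks are handed to protocol dissectors, which can find message boundaries themselves):

```go
package main

import "fmt"

const sslErrorZeroReturn = 6 // OpenSSL's SSL_ERROR_ZERO_RETURN

// aggregator collects per-fd chunks from SSL_read events and flushes a
// stream when a zero-byte read, confirmed by SSL_get_error, signals a
// clean shutdown.
type aggregator struct {
	streams map[int][]byte
}

// onRead appends a sniffed chunk to the fd's buffered stream.
func (a *aggregator) onRead(fd int, chunk []byte) {
	a.streams[fd] = append(a.streams[fd], chunk...)
}

// onReadZero handles SSL_read returning 0; sslError is the value a
// hooked SSL_get_error would report. A clean EOF flushes the stream.
func (a *aggregator) onReadZero(fd, sslError int) ([]byte, bool) {
	if sslError != sslErrorZeroReturn {
		return nil, false // not a clean shutdown; keep buffering
	}
	data := a.streams[fd]
	delete(a.streams, fd)
	return data, true
}

func main() {
	a := &aggregator{streams: map[int][]byte{}}
	a.onRead(18, []byte("HTTP/1.1 200 OK\r\n"))
	a.onRead(18, []byte("\r\nhello"))
	data, done := a.onReadZero(18, sslErrorZeroReturn)
	fmt.Println(done, len(data)) // true 24
}
```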

Improve traffic communication between tracer and worker

When the tracer transfers a packet from the perf buffer to the worker, it copies the packet to a Unix socket (or named pipe); the worker, on its side, copies the packet from the Unix socket/named pipe into its own buffer.

This communication can be improved by avoiding the mediator: the worker can receive packets directly from the perf buffer.

This improvement can bring additional benefits, such as filtering - receiving only traffic that passes the filters.
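The filtering benefit can be illustrated abstractly: when the consumer sits directly on the event source, rejected packets are dropped before any copy is made. In this sketch a channel stands in for the perf buffer; the types are illustrative only:

```go
package main

import "fmt"

// consume drains a perf-buffer-like source and applies the filter
// before anything is forwarded, so rejected packets never cross a
// socket or get copied into a second buffer.
func consume(source <-chan []byte, keep func([]byte) bool, sink func([]byte)) {
	for pkt := range source {
		if !keep(pkt) {
			continue // dropped at the source, zero extra copies
		}
		sink(pkt)
	}
}

func main() {
	src := make(chan []byte, 3)
	src <- []byte("tls record")
	src <- []byte("noise")
	src <- []byte("tls record")
	close(src)

	var delivered int
	consume(src,
		func(p []byte) bool { return string(p) != "noise" },
		func(p []byte) { delivered++ })
	fmt.Println(delivered) // 2
}
```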
