eunomia-bpf / bpftime

Userspace eBPF runtime for Observability, Network & General Extensions Framework

Home Page: https://eunomia.dev/bpftime/

License: MIT License

CMake 5.35% Dockerfile 0.03% Makefile 8.76% C 30.12% C++ 54.94% Python 0.56% Shell 0.18% HTML 0.01% Go 0.06%
ebpf runtime syscall-tracing uprobes userspace jit llvm instrumentation

bpftime's Introduction

logo

eunomia-bpf: simplify and enhance eBPF with CO-RE [1] and WebAssembly [2]

A compiler and runtime framework that helps you build and distribute eBPF programs more easily.

Introduction

eunomia-bpf is a dynamic-loading library/runtime and a compile toolchain framework, aimed at helping you build and distribute eBPF programs more easily.

With eunomia-bpf, you can:

  • Use a library that simplifies writing eBPF programs.
  • Build eBPF programs with Wasm [2]: see the Wasm-bpf project
    • Runtime, libraries and toolchains to write eBPF with Wasm in C/C++, Rust, Go..., covering use cases from tracing and networking to security.
  • Simplify distributing eBPF programs:
    • A tool to push, pull and run pre-compiled eBPF programs as OCI images in Wasm modules
    • Run eBPF programs from the cloud or a URL in one line of bash without recompiling, independent of kernel version and architecture.
    • Dynamically load eBPF programs with a JSON config file or a Wasm module.

For more information, see documents/introduction.md.

Getting Started

Run as a CLI tool or server

You can get pre-compiled eBPF programs running from the cloud to the kernel in 1 line of bash:

# download the release from https://github.com/eunomia-bpf/eunomia-bpf/releases/latest/download/ecli
$ wget https://aka.pw/bpf-ecli -O ecli && chmod +x ./ecli
$ sudo ./ecli run https://eunomia-bpf.github.io/eunomia-bpf/sigsnoop/package.json # simply run a pre-compiled ebpf code from a url
INFO [bpf_loader_lib::skeleton] Running ebpf program...
TIME     PID    TPID   SIG    RET    COMM   
01:54:49  77297 8042   0      0      node
01:54:50  77297 8042   0      0      node
01:54:50  78788 78787  17     0      which
01:54:50  78787 8084   17     0      sh
01:54:50  78790 78789  17     0      ps
01:54:50  78789 8084   17     0      sh
01:54:50  78793 78792  17     0      sed
01:54:50  78794 78792  17     0      cat
01:54:50  78795 78792  17     0      cat

$ sudo ./ecli run ghcr.io/eunomia-bpf/execve:latest # run with a name and download the latest version bpf tool from our repo
[79130] node -> /bin/sh -c which ps 
[79131] sh -> which ps 
[79132] node -> /bin/sh -c /usr/bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,c 
[79133] sh -> /usr/bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,command= 
[79134] node -> /bin/sh -c "/home/yunwei/.vscode-server/bin/2ccd690cbf 
[79135] sh -> /home/yunwei/.vscode-server/bin/2ccd690cbff 78132 79119 79120 79121 
[79136] cpuUsage.sh -> sed -n s/^cpu\s//p /proc/stat

You can also use a server to manage and dynamically install eBPF programs.

Start the server:

$ sudo ./ecli-server
[2023-08-08 02:02:03.864009 +08:00] INFO [server/src/main.rs:95] Serving at 127.0.0.1:8527

Use the ecli to control the remote server and manage multiple eBPF programs:

$ ./ecli client start sigsnoop.json # start the program
1
$ ./ecli client log 1 # get the log of the program
TIME     PID    TPID   SIG    RET    COMM   
02:05:58  79725 78132  17     0      bash
02:05:59  77325 77297  0      0      node
02:05:59  77297 8042   0      0      node
02:05:59  77297 8042   0      0      node
02:05:59  79727 79726  17     0      which
02:05:59  79726 8084   17     0      sh
02:05:59  79731 79730  17     0      which

For more information, see documents/src/ecli/server.md.

Install the project

  • Install the ecli tool for running eBPF programs from the cloud:

    $ wget https://aka.pw/bpf-ecli -O ecli && chmod +x ./ecli
    $ ./ecli -h
    ecli subcommands, including run, push, pull, login, logout
    
    Usage: ecli-rs [PROG] [EXTRA_ARGS]... [COMMAND]
    
    Commands:
      run     run ebpf program
      client  Client operations
      push    
      pull    pull oci image from registry
      login   login to oci registry
      logout  logout from registry
      help    Print this message or the help of the given subcommand(s)
    
    Arguments:
      [PROG]           Not preferred. Only for compatibility to older versions. Ebpf program URL or local path, set it `-` to read the program from stdin
      [EXTRA_ARGS]...  Not preferred. Only for compatibility to older versions. Extra args to the program; For wasm program, it will be passed directly to it; For JSON program, it will be passed to the generated argument parser
    
    Options:
      -h, --help  Print help
    ....
  • Install the ecc compiler toolchain for compiling eBPF kernel code to a config file or Wasm module (clang, llvm, and libclang must be installed for compiling):

    $ wget https://github.com/eunomia-bpf/eunomia-bpf/releases/latest/download/ecc && chmod +x ./ecc
    $ ./ecc -h
    eunomia-bpf compiler
    Usage: ecc [OPTIONS] <SOURCE_PATH> [EXPORT_EVENT_HEADER]
    ....

    or use the Docker image to compile:

    # for x86_64 and aarch64
    docker run -it -v `pwd`/:/src/ ghcr.io/eunomia-bpf/ecc-`uname -m`:latest # compile with Docker; `pwd` should contain the *.bpf.c and *.h files.
  • Build the compiler, runtime library and tools:

    See build for details.

Examples

See examples for details about simple eBPF tools and eunomia-bpf library usage.

See github.com/eunomia-bpf/wasm-bpf/tree/main/examples for Wasm eBPF programs and examples.

We also have a proof-of-concept video: Writing eBPF programs in Wasm.

License

MIT LICENSE

Footnotes

  1. CO-RE: Compile Once – Run Everywhere

  2. WebAssembly or Wasm: https://webassembly.org/

bpftime's People

Contributors

aneeshdamle11, caizixian, dependabot[bot], fr0m-scratch, fripside, hp77-creator, jiahuann, kailian-jacy, littlefisher619, lnicola, nobinpegasus, officeyutong, rphang, sanket-0510, shawnzhong, viveksati5143, yuanrui77, yunwei37, zheaoli

bpftime's Issues

[BUG] make build with error

Describe the bug

make build exits with an error; a clearer error message is needed.
To Reproduce

git clone bpftime, then just run make build.
Expected behavior

Some informative message should be printed if the build environment is not ready.
Screenshots

$ make build
cmake -Bbuild -DBPFTIME_ENABLE_UNIT_TESTING=1
-- Enabling ubsan for Debug builds; Processor=x86_64
-- Started CMake for bpftime v0.1.0...

CMake Error at /usr/share/cmake/Modules/ExternalProject.cmake:1721 (file):
  file problem creating directory:
  /home/pdliyan/bpftime/third_party/libbpf//src
Call Stack (most recent call first):
  /usr/share/cmake/Modules/ExternalProject.cmake:3633 (_ep_set_directories)
  cmake/libbpf.cmake:6 (ExternalProject_Add)
  CMakeLists.txt:63 (include)


CMake Error at /usr/share/cmake/Modules/ExternalProject.cmake:1723 (message):
  dir '/home/pdliyan/bpftime/third_party/libbpf//src' does not exist after
  file(MAKE_DIRECTORY)
Call Stack (most recent call first):
  /usr/share/cmake/Modules/ExternalProject.cmake:3633 (_ep_set_directories)
  cmake/libbpf.cmake:6 (ExternalProject_Add)
  CMakeLists.txt:63 (include)


-- Configuring incomplete, errors occurred!
See also "/home/pdliyan/bpftime/build/CMakeFiles/CMakeOutput.log".
make: *** [Makefile:48: build] Error 1

Desktop (please complete the following information):

  • OS: Centos
  • Version 8
    CentOS Linux release 8.4.2105

Additional context

[FEATURE] Refinement of Benchmarking for System Calls

Description:
The current benchmarking approach might be providing misleading results due to the inclusion of certain functions that induce context switching. For instance, current system calls (syscalls) include print functions which overshadow the minor performance differences, as the results are mostly in the order of milliseconds (ms). However, the difference in eBPF programs for syscall tracepoints might just be a few hundred nanoseconds (ns).

Proposed Solutions:

  1. Decompose the Benchmark into Multiple Cases: The benchmark should be divided into distinct cases to evaluate the impact of each segment on performance.
    • Syscall Tracepoint Benchmark: This benchmark should only focus on syscall tracepoints. It can include a very simple arithmetic operation like (a+b) to prevent it from being optimized away. This will help in calculating the inherent overhead of syscall tracepoints.
    • Hashmaps Benchmark: This case should measure the time taken for hashmap operations. It should include a helper function to retrieve the PID and use hashmaps for statistics.
    • LLVM JIT Performance: Test the performance of the LLVM JIT with slightly complex calculations to understand its impact.

Note: If implementing the above solutions is cumbersome, consider creating an issue for future reference and revisit it later.
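A user-space skeleton for the first case might look like the following (a sketch with hypothetical names, not the project's actual benchmark harness): it wraps one cheap syscall plus a trivial (a + b) in a timed loop, so the delta between a traced run and an untraced run reflects tracepoint overhead rather than the payload.

```c
#define _GNU_SOURCE
#include <stdint.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

/* Monotonic clock read in nanoseconds. */
static uint64_t now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Average cost per iteration of one getpid syscall plus a trivial add.
 * Run it with and without a tracepoint attached; the difference is the
 * per-event overhead. */
double bench_syscall_avg_ns(long iters)
{
	volatile long sink = 0; /* volatile keeps the add from being optimized away */
	uint64_t start = now_ns();
	for (long i = 0; i < iters; i++)
		sink += (long)syscall(SYS_getpid) + i; /* trivial (a + b) around one syscall */
	return (double)(now_ns() - start) / (double)iters;
}
```

Subtracting the untraced average from the traced average isolates the inherent overhead of the syscall tracepoint, which should be measured in nanoseconds rather than milliseconds.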

[BUG] Agent segmentation fault on Ctrl-C

Describe the bug

$ sudo ~/.bpftime/bpftime  start -s example/libbpf-tools/opensnoop/opensnoop
...
2185   node              21   0 /proc/24936/cmdline
2185   node              21   0 /proc/24856/cmdline
2185   node              21   0 /proc/24936/cmdline
2185   node              21   0 /proc/24856/cmdline
2185   node              21   0 /proc/24936/cmdline
2185   node              21   0 /proc/24856/cmdline
2185   node              21   0 /proc/24936/cmdline
2185   node              21   0 /proc/24856/cmdline
^CSegmentation fault

To Reproduce

run

$ sudo ~/.bpftime/bpftime  load example/libbpf-tools/opensnoop/opensnoop

and run:

$ sudo ~/.bpftime/bpftime  start -s example/libbpf-tools/opensnoop/opensnoop

Expected behavior

Screenshots

Desktop (please complete the following information):

  • OS: [e.g. Windows]
  • Version [e.g. 10]

Additional context

Poor performance using bpftime+bpftrace with spdk

Hi,

I attended the bpftime talk at LPC last week. It was a very good talk and I was excited to try it with https://github.com/spdk/spdk - a project where I am a core maintainer.

SPDK is a high-performance user space storage application framework. It has its own tracing capability. We have tried to use bpftrace with SPDK, but the overhead of the context switches is too high to instrument high-performance IO paths. So we primarily use bpftrace for control paths, and rely on the native SPDK tracing for instrumenting high-performance IO paths.

So bpftime is something we are very interested in. Unfortunately our initial testing (seen both by me and @ksztyber) is that bpftime + bpftrace actually performed worse than bpftrace by itself. We are hoping the bpftime community could help explain what we are seeing, including anything we may have done wrong with our configuration.

Reproduction steps:

First, clone SPDK:

git clone https://github.com/spdk/spdk
git submodule update --init

Install dependencies:

cd spdk
scripts/pkgdep.sh

Build SPDK:

./configure --with-usdt
make -j

Paste the following into a file named null.json. This is the configuration for a null block device which will be used for the test:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_null_create",
          "params": {
            "name": "null0",
            "num_blocks": 2048000,
            "block_size": 512,
            "physical_block_size": 512,
            "md_size": 0,
            "dif_type": 0,
            "dif_is_head_of_md": false,
            "uuid": "38c40683-88fe-446b-8db0-6b815396b2e7"
          }
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    }
  ]
}

Try running the SPDK bdevperf test without any probes. The test will run for 5 seconds, and then should show somewhere between 10M and 20M IOPs depending on your platform (it is 13.9M IO/s on my system):

build/examples/bdevperf -w randread -o 4096 -q 16 -t 5 -c null.json

Now start bpftrace by itself, and run bdevperf (note, I couldn't get the bdevperf app to terminate correctly running it as an argument to bpftrace -c):

bpftrace -e 'uprobe:/home/jimharris/git/spdk/build/examples/bdevperf:spdk_bdev_io_complete { @completed = count(); }' &
build/examples/bdevperf -w randread -o 4096 -q 16 -t 5 -c null.json
fg
^C

On my Xeon system, this drops the performance to about 1.6M IO/s.

Now with bpftime:

bpftime load bpftrace -- -e 'uprobe:/home/jimharris/git/spdk/build/examples/bdevperf:spdk_bdev_io_complete { @completed = count(); }'
# now in other terminal
bpftime start build/examples/bdevperf -- -w randread -o 4096 -q 16 -t 5 -c null.json

On my Xeon system, this further drops the performance to about 360K IO/s.

Please let us know if there is anything we can do to help explain this.

Thanks,

Jim Harris

[BUG] bpftrace doesn't terminate correctly

Describe the bug

bpftrace doesn't terminate correctly; it doesn't respond to Ctrl-C.

reproduce:

  1. Install bpftrace with package manager: apt install bpftrace
  2. Run bpftrace
sudo SPDLOG_LEVEL=error ~/.bpftime/bpftime load bpftrace -- -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args->filename)); }'

Expected behavior

Ctrl-C should work correctly.

[FEATURE] Fix more examples

Is your feature request related to a problem? Please describe.

Currently only opensnoop and malloc work as examples. The others cannot, because perf event output is not implemented yet.

We should fix it.

Describe the solution you'd like

Describe alternatives you've considered

Provide usage examples

Additional context

[BUG] Remove shared_ptr in attach ctx

Describe the bug

Running the benchmark with multiple threads results in significant performance overhead:

$ LD_PRELOAD=build/runtime/agent/libbpftime-agent.so benchmark/test

Benchmarking __benchmark_test_function1 in thread 1
Average time usage 1.351700 ns, iter 100000 times

Benchmarking __benchmark_test_function2 in thread 1
Average time usage 5.864650 ns, iter 100000 times

Benchmarking __benchmark_test_function3 in thread 1
Average time usage 288.789940 ns, iter 100000 times

INFO [156136]: Global shm destructed
$ LD_PRELOAD=build/runtime/agent/libbpftime-agent.so benchmark/test 8
...
Benchmarking __benchmark_test_function3 in thread 8
Average time usage 1294.628970 ns, iter 100000 times

Benchmarking __benchmark_test_function3 in thread 3
Average time usage 1338.905010 ns, iter 100000 times

Benchmarking __benchmark_test_function3 in thread 8
Average time usage 1671.551980 ns, iter 100000 times

Benchmarking __benchmark_test_function3 in thread 8
Average time usage 1685.110550 ns, iter 100000 times

Benchmarking __benchmark_test_function3 in thread 4
Average time usage 2021.452540 ns, iter 100000 times

Benchmarking __benchmark_test_function3 in thread 8
Average time usage 1881.932160 ns, iter 100000 times

Benchmarking __benchmark_test_function3 in thread 8
Average time usage 1442.957710 ns, iter 100000 times

Benchmarking __benchmark_test_function3 in thread 1
Average time usage 1837.997850 ns, iter 100000 times

INFO [151866]: Global shm destructed

See the discussion in #97. The test driver is in #103.

Expected behavior

There should be no lock, no syscalls in our hot path. We should try our best to optimize the overhead.
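One direction (a sketch under the assumption that per-hit state can be made thread-private; these names are illustrative, not bpftime's real symbols) is to move hot-path state into thread-local storage, so a probe hit touches no lock, no atomic refcount such as a shared_ptr's, and no syscall:

```c
/* Per-thread counter: each thread increments its own cache line, avoiding
 * the atomic read-modify-write that a shared shared_ptr refcount would
 * perform on every probe hit across all threads. */
static _Thread_local unsigned long local_hits;

/* The hot path: a plain increment of thread-local storage. */
static inline void on_probe_hit(void)
{
	local_hits++; /* no lock, no atomic, no syscall */
}

/* Read back the calling thread's count (e.g. for aggregation at exit). */
unsigned long current_thread_hits(void)
{
	return local_hits;
}
```

Aggregation across threads can then happen off the hot path, for example when a thread exits or when a reader walks a registry of per-thread slots.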

[FEATURE] Support kernel ring buffer and perf event maps

Is your feature request related to a problem? Please describe.

Add kernel ring buffer and perf event support for kernel-user maps

Describe the solution you'd like

Describe alternatives you've considered

Provide usage examples

Additional context

[FEATURE] Enhance documents

Is your feature request related to a problem? Please describe.

We need more documents:

  • Description of how to use bpftime for uprobe, syscall tracing
  • The details of how bpftime works

Describe the solution you'd like

Add documents in eunomia.dev

Describe alternatives you've considered

Provide usage examples

Additional context

[FEATURE] Use kernel eBPF runtime for bpftime

Is your feature request related to a problem? Please describe.

As discussed before, we need to use the kernel eBPF runtime if the program accesses kernel data structures like task_struct.

We can use a syscall instead of a trap, which may roughly halve the overhead, because syscalls are more lightweight.

Describe the solution you'd like

See the kernel-vm branch for poc. We should find a better way to make it work with libbpf.

Describe alternatives you've considered

Provide usage examples

Additional context

[BUG] bpftime does not seem to catch the process context properly

Describe the bug
For example, using bash_readline, bpftime can properly catch the readline input, but it reports Failed to initialize attach context.

To Reproduce

1. Go to example/bash_readline
2. Run bpftime load ./readline
3. Start another shell and note its PID
4. In another shell, run bpftime attach <pid> or bpftime start /bin/bash

Expected behavior
Running properly.

Screenshots
efaa6ffa2198fd2ad78c452a3215b09a.png

Desktop (please complete the following information):

  • OS: win11 WSL2 ubuntu:2204

Best regards.

[BUG] Fix CI for testing runtime

Describe the bug

Fix the CI:

  • add boost as deps
  • fix libbpf link error
  • fix test cases

To Reproduce

Expected behavior

Screenshots

Desktop (please complete the following information):

  • OS: [e.g. Windows]
  • Version [e.g. 10]

Additional context

[FEATURE] Performance issue of frida gum

The invocation listener of frida-gum has some performance issues:

g_array_set_size

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total           
 time   seconds   seconds    calls  ns/call  ns/call  name    
 51.28      0.20     0.20                             _frida_g_array_set_size
 15.38      0.26     0.06                             _gum_function_context_begin_invocation
  7.69      0.29     0.03                             main
  5.13      0.31     0.02                             _gum_function_context_end_invocation
  5.13      0.33     0.02                             _init
  5.13      0.35     0.02                             gum_invocation_stack_push
  5.13      0.37     0.02                             plus

According to my test, the call to g_array_set_size takes half the time of the whole run.

With more investigation, the call to this function happens at gum_invocation_stack_push and gum_invocation_stack_pop, where frida-gum uses a g_array to maintain a call stack, pushing or popping elements when entering _gum_function_context_begin_invocation or _gum_function_context_end_invocation. So g_array_set_size is called at least twice each time the hooked function is called.

According to the implementation, a memset may be used to clear the extra elements, so I think this is one of the causes of the performance issues.

The output of perf top also confirms this.
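A toy model of that behavior (purely illustrative; this is not GLib's actual code) shows why a zeroing set_size is expensive when called on every push and pop of the invocation stack:

```c
#include <stdlib.h>
#include <string.h>

/* Toy resizable byte array whose set_size zeroes newly exposed bytes,
 * modeling the memset g_array_set_size performs for cleared arrays.
 * Shrinking then regrowing (as a stack pop/push pair does) pays the
 * clear again every time. */
struct toy_array {
	unsigned char *data;
	size_t len, cap;
};

int toy_set_size(struct toy_array *a, size_t new_len)
{
	if (new_len > a->cap) {
		size_t cap = a->cap ? a->cap : 16;
		while (cap < new_len)
			cap *= 2;
		unsigned char *p = realloc(a->data, cap);
		if (!p)
			return -1;
		a->data = p;
		a->cap = cap;
	}
	if (new_len > a->len)
		memset(a->data + a->len, 0, new_len - a->len); /* the costly clear */
	a->len = new_len;
	return 0;
}
```

In a hooked-function hot path this clear runs at least twice per call, which matches g_array_set_size dominating the gprof output above.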

pthread_setspecific

Testing with multiple threads, pthread_setspecific costs 14.09% of total instruction reads, which is even more than _gum_function_context_begin_invocation itself.

--------------------------------------------------------------------------------
Ir                      
--------------------------------------------------------------------------------
19,585,761,005 (100.0%)  PROGRAM TOTALS

--------------------------------------------------------------------------------
Ir                      file:function
--------------------------------------------------------------------------------
2,760,002,940 (14.09%)  ./nptl/./nptl/pthread_setspecific.c:pthread_setspecific@@GLIBC_2.34 [/usr/lib/x86_64-linux-gnu/libc.so.6]
2,660,000,000 (13.58%)  frida/build/tmp-linux-x86_64/frida-gum/../../../frida-gum/gum/guminterceptor.c:_gum_function_context_begin_invocation [/root/frida-gum-test/a.out]
2,180,070,385 (11.13%)  ./gmon/./gmon/mcount.c:__mcount_internal [/usr/lib/x86_64-linux-gnu/libc.so.6]
2,000,054,740 (10.21%)  ./gmon/../sysdeps/x86_64/_mcount.S:mcount [/usr/lib/x86_64-linux-gnu/libc.so.6]
1,640,000,000 ( 8.37%)  frida/build/tmp-linux-x86_64/frida-gum/../../../frida-gum/gum/guminterceptor.c:_gum_function_context_end_invocation [/root/frida-gum-test/a.out]
1,580,020,848 ( 8.07%)  ???:main::{lambda()#1}::operator()() const'2 [/root/frida-gum-test/a.out]
1,120,000,000 ( 5.72%)  ???:0x0000000005325208 [???]

pthread_setspecific and pthread_getspecific are APIs to read and write thread-local variables. frida-gum uses them (and g_private, which also calls the POSIX APIs) to maintain per-thread state, and accesses them frequently in invocation begin and end.

The output of perf top confirms this.
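For reference, the API pair in question, in a minimal self-contained sketch. Every access is a real call into libc, unlike compiler-level `__thread`/`_Thread_local` storage, which typically compiles down to a segment-relative load; that difference is why these calls show up so prominently in the profile:

```c
#include <pthread.h>
#include <stddef.h>

/* One process-wide key, created exactly once across all threads. */
static pthread_key_t state_key;
static pthread_once_t key_once = PTHREAD_ONCE_INIT;

static void make_key(void)
{
	pthread_key_create(&state_key, NULL); /* no destructor for this sketch */
}

/* Store a per-thread pointer; each thread sees only its own value. */
void set_thread_state(void *p)
{
	pthread_once(&key_once, make_key);
	pthread_setspecific(state_key, p);
}

/* Retrieve the calling thread's pointer (NULL if never set). */
void *get_thread_state(void)
{
	pthread_once(&key_once, make_key);
	return pthread_getspecific(state_key);
}
```

Replacing these calls on the hot path with `_Thread_local` variables is one plausible mitigation, at the cost of losing the per-key destructor machinery.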

Atomic instructions

In the implementation of _gum_function_context_begin_invocation, an atomic instruction lock incl (%rax) takes about 25% of the function's time.

When using multiple threads (20), atomic instructions cause a larger performance decrease.

[FEATURE] Merge the BPF_TYPE_UPROBE_OVERRIDE and BPF_TYPE_UPROBE

Is your feature request related to a problem? Please describe.

Currently we have the types BPF_TYPE_UPROBE_OVERRIDE and BPF_TYPE_UPROBE, implemented with different mechanisms. I think there is no need to keep both; we can just make the uprobe able to use bpf_override_return.

Describe the solution you'd like

Remove BPF_TYPE_UPROBE_OVERRIDE and make BPF_TYPE_UPROBE use frida replace.

Describe alternatives you've considered

Is it better not to change them? I'm not sure.

[FEATURE] Add a new test framework

We need a new framework to run unit tests and integrated tests.

Some notes:

  • Use Catch2 to run unit tests. Examples can be seen at /bpftime-verifier
  • Unit tests of a certain sub-project should be linked into a single executable
  • Tests should be fully automatically executed, meaning that everything from building and deploying eBPF programs to running the executables should be done by the build system, or something else. We already have a cmake function add_ebpf_program_target that adds a target to build an eBPF program. Make use of this, for example by passing the path of the already-built eBPF program to the test programs through a macro definition; benchmark/simple-benchmark-with-embed-ebpf-calling uses this approach

[BUG] undefined reference to `shm_open'. Is `-lrt` missing?

Describe the bug

make release reports an error.

To Reproduce

cd bpftime && make release

Expected behavior

Screenshots

gmake[3]: Entering directory '/home/pdliyan/bpftime/build'
[ 95%] Linking CXX executable bpftime_daemon
../runtime/libruntime.a(bpftime_shm_internal.cpp.o): In function `bpftime_remove_global_shm':
bpftime_shm_internal.cpp:(.text+0x4223): undefined reference to `shm_unlink'
../runtime/libruntime.a(bpftime_shm_internal.cpp.o): In function `bpftime::bpftime_shm::bpftime_shm(char const*, bpftime::shm_open_type)':
bpftime_shm_internal.cpp:(.text+0xc9dd): undefined reference to `shm_unlink'
../runtime/libruntime.a(bpftime_shm_internal.cpp.o): In function `boost::interprocess::shared_memory_object::shared_memory_object(boost::interprocess::open_only_t, char const*, boost::interprocess::mode_t)':
bpftime_shm_internal.cpp:(.text._ZN5boost12interprocess20shared_memory_objectC2ENS0_11open_only_tEPKcNS0_6mode_tE[_ZN5boost12interprocess20shared_memory_objectC5ENS0_11open_only_tEPKcNS0_6mode_tE]+0xa1): undefined reference to `shm_open'
../runtime/libruntime.a(bpftime_shm_internal.cpp.o): In function `void boost::interprocess::ipcdetail::managed_open_or_create_impl<boost::interprocess::shared_memory_object, 16ul, true, false>::priv_open_or_create<char const*, boost::interprocess::ipcdetail::create_open_func<boost::interprocess::ipcdetail::basic_managed_memory_impl<char, boost::interprocess::rbtree_best_fit<boost::interprocess::mutex_family, boost::interprocess::offset_ptr<void, long, unsigned long, 0ul>, 0ul>, boost::interprocess::iset_index, 16ul> > >(boost::interprocess::ipcdetail::create_enum_t, char const* const&, unsigned long, boost::interprocess::mode_t, void const*, boost::interprocess::permissions const&, boost::interprocess::ipcdetail::create_open_func<boost::interprocess::ipcdetail::basic_managed_memory_impl<char, boost::interprocess::rbtree_best_fit<boost::interprocess::mutex_family, boost::interprocess::offset_ptr<void, long, unsigned long, 0ul>, 0ul>, boost::interprocess::iset_index, 16ul> >)':
bpftime_shm_internal.cpp:(.text._ZN5boost12interprocess9ipcdetail27managed_open_or_create_implINS0_20shared_memory_objectELm16ELb1ELb0EE19priv_open_or_createIPKcNS1_16create_open_funcINS1_25basic_managed_memory_implIcNS0_15rbtree_best_fitINS0_12mutex_familyENS0_10offset_ptrIvlmLm0EEELm0EEENS0_10iset_indexELm16EEEEEEEvNS1_13create_enum_tERKT_mNS0_6mode_tEPKvRKNS0_11permissionsET0_[_ZN5boost12interprocess9ipcdetail27managed_open_or_create_implINS0_20shared_memory_objectELm16ELb1ELb0EE19priv_open_or_createIPKcNS1_16create_open_funcINS1_25basic_managed_memory_implIcNS0_15rbtree_best_fitINS0_12mutex_familyENS0_10offset_ptrIvlmLm0EEELm0EEENS0_10iset_indexELm16EEEEEEEvNS1_13create_enum_tERKT_mNS0_6mode_tEPKvRKNS0_11permissionsET0_]+0x138): undefined reference to `shm_open'
bpftime_shm_internal.cpp:(.text._ZN5boost12interprocess9ipcdetail27managed_open_or_create_implINS0_20shared_memory_objectELm16ELb1ELb0EE19priv_open_or_createIPKcNS1_16create_open_funcINS1_25basic_managed_memory_implIcNS0_15rbtree_best_fitINS0_12mutex_familyENS0_10offset_ptrIvlmLm0EEELm0EEENS0_10iset_indexELm16EEEEEEEvNS1_13create_enum_tERKT_mNS0_6mode_tEPKvRKNS0_11permissionsET0_[_ZN5boost12interprocess9ipcdetail27managed_open_or_create_implINS0_20shared_memory_objectELm16ELb1ELb0EE19priv_open_or_createIPKcNS1_16create_open_funcINS1_25basic_managed_memory_implIcNS0_15rbtree_best_fitINS0_12mutex_familyENS0_10offset_ptrIvlmLm0EEELm0EEENS0_10iset_indexELm16EEEEEEEvNS1_13create_enum_tERKT_mNS0_6mode_tEPKvRKNS0_11permissionsET0_]+0x295): undefined reference to `shm_open'
bpftime_shm_internal.cpp:(.text._ZN5boost12interprocess9ipcdetail27managed_open_or_create_implINS0_20shared_memory_objectELm16ELb1ELb0EE19priv_open_or_createIPKcNS1_16create_open_funcINS1_25basic_managed_memory_implIcNS0_15rbtree_best_fitINS0_12mutex_familyENS0_10offset_ptrIvlmLm0EEELm0EEENS0_10iset_indexELm16EEEEEEEvNS1_13create_enum_tERKT_mNS0_6mode_tEPKvRKNS0_11permissionsET0_[_ZN5boost12interprocess9ipcdetail27managed_open_or_create_implINS0_20shared_memory_objectELm16ELb1ELb0EE19priv_open_or_createIPKcNS1_16create_open_funcINS1_25basic_managed_memory_implIcNS0_15rbtree_best_fitINS0_12mutex_familyENS0_10offset_ptrIvlmLm0EEELm0EEENS0_10iset_indexELm16EEEEEEEvNS1_13create_enum_tERKT_mNS0_6mode_tEPKvRKNS0_11permissionsET0_]+0x448): undefined reference to `shm_open'
collect2: error: ld returned 1 exit status
gmake[3]: *** [daemon/CMakeFiles/bpftime_daemon.dir/build.make:103: daemon/bpftime_daemon] Error 1
gmake[3]: Leaving directory '/home/pdliyan/bpftime/build'
gmake[2]: *** [CMakeFiles/Makefile2:813: daemon/CMakeFiles/bpftime_daemon.dir/all] Error 2
gmake[2]: Leaving directory '/home/pdliyan/bpftime/build'
gmake[1]: *** [Makefile:136: all] Error 2
gmake[1]: Leaving directory '/home/pdliyan/bpftime/build'
make: *** [Makefile:54: release] Error 2

Desktop (please complete the following information):

  • OS: Centos
  • Version 8
    CentOS Linux release 8.4.2105

Additional context

I'm not familiar with gmake, so I have to send this bug report.
With the -lrt flag, my local test code using shm_open works fine:

#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    // Create or open a shared memory object
    int fd = shm_open("/my_shared_memory", O_CREAT | O_RDWR, 0666);
    if (fd == -1) {
        perror("shm_open");
        return 1;
    }

    // Set the size of the shared memory object
    if (ftruncate(fd, 1024) == -1) {
        perror("ftruncate");
        return 1;
    }

    // Map the shared memory into the process address space
    void* ptr = mmap(NULL, 1024, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    // Write data into the shared memory
    sprintf((char*)ptr, "Hello, shared memory!");

    // Unmap the memory
    if (munmap(ptr, 1024) == -1) {
        perror("munmap");
        return 1;
    }

    // Close the shared memory object
    if (close(fd) == -1) {
        perror("close");
        return 1;
    }

    // Unlink the shared memory object
    if (shm_unlink("/my_shared_memory") == -1) {
        perror("shm_unlink");
        return 1;
    }

    printf("okay\n");

    return 0;
}
$ gcc -o sha_e shm.c -lrt && ./sha_e
okay

[FEATURE] Test libbpf CO-RE for userspace

Is your feature request related to a problem? Please describe.

libbpf CO-RE should be able to work directly. We need to provide an example for it.

Describe the solution you'd like

Describe alternatives you've considered

Provide usage examples

Additional context

[BUG] daemon cannot run if /a does not exist in the system

Describe the bug

The current daemon contains a hack:

	// strncpy(obj->rodata->new_uprobe_path, env.new_uprobe_path, PATH_LENTH);
	// TODO: currently using `/a` as the replacing executable path to uprobe
	// perf event in the kernel, since long strings (such as bpftime_daemon it self)
	// may break userspace memory.
	// Find a better way to solve this in the future
	strncpy(obj->rodata->new_uprobe_path, "/a", PATH_LENTH);

This will make the uprobe fail if /a does not exist as an ELF file.

To Reproduce

run daemon

Expected behavior

  1. Can we find a better way to solve this?
  2. Maybe we can create the /a file in advance?

Screenshots

Desktop (please complete the following information):

  • OS: [e.g. Windows]
  • Version [e.g. 10]

Additional context

[BUG] Attaching /bin/bash causes a segmentation fault when testing gethostlatency

Describe the bug
Attaching /bin/bash causes a segmentation fault when testing gethostlatency.
By the way, when using attach -s, it reports [error][143913] Failed to initialize attach context.

To Reproduce
1. Use example/libbpf-tools/gethostlatency
2. (as root) Run bpftime load ./gethostlatency
3. In another shell, run bpftime start -s /bin/bash

Expected behavior
Works properly.

Screenshots
1f638acf92623bfff88b1539a1875935.png

Desktop (please complete the following information):

  • OS: win11 WSL2 Ubuntu:2204

Best regards. :)

[FEATURE] Add syscall tracepoint support and benchmark

Is your feature request related to a problem? Please describe.

  • Add syscall tracepoint support in attach ctx
  • Add a test for syscall tracepoint
  • Add benchmark for tracepoint overhead

Describe the solution you'd like

Describe alternatives you've considered

Provide usage examples

Additional context

[BUG] uretprobe behaves improperly when attaching two programs to the same function

Attach the following program:

// SPDX-License-Identifier: GPL-2.0
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
SEC("uretprobe/./example/simple_uretprobe_test/victim:simple_add")
int BPF_URETPROBE(simple_probe, long ret)
{
	bpf_printk("Ret=%ld\n", ret);

	return 0;
}

SEC("uretprobe/./example/simple_uretprobe_test/victim:simple_add")
int BPF_URETPROBE(simple_probe2, long ret)
{
	bpf_printk("Ret2=%ld\n", ret);

	return 0;
}

char LICENSE[] SEC("license") = "GPL";

victim:

#include <cstdint>
#include <fcntl.h>
#include <iostream>
#include <ostream>
#include <unistd.h>

extern "C" int64_t simple_add(int64_t a, int64_t b)
{
	return a + b;
}

int main()
{
	while (true) {
		for (int i = 1; i <= 10; i++) {
			for (int j = 1; j <= 10; j++) {
				int32_t ret = simple_add(i, j);
				std::cout << i << " + " << j << " = " << ret
					  << std::endl;
				usleep(1000 * 500);
			}
		}
	}
	return 0;
}

Will lead to the following output:

Ret2=1
Ret=2
1 + 1 = 2
Ret2=1
Ret=3
1 + 2 = 3
Ret2=1
Ret=4
1 + 3 = 4
Ret2=1
Ret=5
1 + 4 = 5

Here Ret2 prints the first argument rather than the return value, so the second program appears to have been attached as a uprobe instead of a uretprobe.

[FEATURE] Add CI with JIT enabled

Is your feature request related to a problem? Please describe.

Currently, our CI tests the examples, but only against the interpreter. We also need tests with the JIT enabled.

Describe the solution you'd like

Add a matrix entry for testing with JIT in the GitHub Actions CI.

[FEATURE] Add unit test for daemon

Is your feature request related to a problem? Please describe.

Currently the daemon has few unit tests because we have not had much time to add them.

Describe the solution you'd like

Add unit tests for the daemon based on Catch2.

[BUG] failed to start example/malloc/victim

Describe the bug

Failed to start the example. Seems to be a shared-library path error.

To Reproduce

make release && make install && bpftime start ./example/malloc/victim
Screenshots

$ bpftime start ./example/malloc/victim
[2023-11-07 06:32:03.859] [info] Entering bpftime agent
[2023-11-07 06:32:03.859] [info] Global shm constructed. shm_open_type 1 for bpftime_maps_shm
[2023-11-07 06:32:03.859] [info] Initializing agent..
[2023-11-07 06:32:03][info][422426] Executable path: /home/pdliyan/bpftime/example/malloc/victim
[2023-11-07 06:32:03][error][422426] Failed to find module base address for /lib/x86_64-linux-gnu/libc.so.6
[2023-11-07 06:32:03][info][422426] Attached 0 uprobe programs to function 0
Error: Exited with code: None

Desktop (please complete the following information):

  • OS: CentOS
  • Version: 8 (CentOS Linux release 8.4.2105)

Additional context

My local libc.so.6 is located in:

$ whereis libc
libc: /usr/lib/libc.so /usr/lib64/libc.so

Is something wrong with the build configs?

[FEATURE] Replace libopcodes with frida-capstone

Currently, the text segment transformer uses libopcodes to disassemble asm bytes, but libopcodes has an API break in recent versions. Replace it with frida-capstone to reduce redundant configuration.

[FEATURE] Add benchmark for embed API of userspace eBPF runtime

Is your feature request related to a problem? Please describe.

Embed our VM in userspace functions and test the performance difference compared with uprobe.

[FEATURE] Add CI for benchmark

Is your feature request related to a problem? Please describe.

There is currently no CI for the benchmark examples.

Describe the solution you'd like

Add a shell script to test it, and run it in CI.
