
gaio's Introduction

gaio


Introduction

中文介绍 (Chinese introduction)

In a typical Go network program, you first call conn := lis.Accept() to get a connection, start a goroutine with go func(net.Conn) to handle the incoming data, allocate a buffer with buf := make([]byte, 4096), and finally wait on conn.Read(buf).
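
A minimal sketch of that classic goroutine-per-connection flow (standard library only, assuming import "net"; the echo body is illustrative):

func serve(lis net.Listener) {
        for {
                conn, err := lis.Accept()
                if err != nil {
                        return
                }
                // one goroutine per connection
                go func(c net.Conn) {
                        defer c.Close()
                        buf := make([]byte, 4096)
                        for {
                                n, err := c.Read(buf) // the goroutine parks here
                                if err != nil {
                                        return
                                }
                                c.Write(buf[:n]) // echo back
                        }
                }(conn)
        }
}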

For a server holding >10K connections with frequent short messages (e.g. < 512B), the cost of context switching is much higher than the cost of receiving a message: a context switch takes at least 1000 CPU cycles, or about 600 ns at 2.1 GHz.

By replacing the goroutine-per-connection scheme with edge-triggered I/O multiplexing, the 2KB(R)+2KB(W) goroutine stacks per connection can be saved. By using the internal swap buffer, the buf := make([]byte, 4096) allocation can also be saved (at some cost in performance).

gaio is a proactor-pattern networking library that satisfies both memory constraints and performance goals.

Features

  1. Tested in high-frequency trading, handling HTTP requests at 30K~40K RPS on a single HVM server.
  2. Designed for >C10K concurrent connections, with maximized parallelism and good single-connection throughput.
  3. Read(ctx, conn, buffer) can be called with a nil buffer to make use of the internal swap buffer (see the sketch after this list).
  4. Non-intrusive design: the library works with net.Listener and net.Conn (with syscall.RawConn support) and is easy to integrate into existing software.
  5. Amortized context-switching cost for tiny messages, able to handle frequent chat-message exchange.
  6. The application decides when to delegate a net.Conn to gaio; for example, you can delegate it after a handshaking procedure, or after applying net.TCPConn settings.
  7. The application decides when to submit read or write requests, so per-connection back-pressure can be propagated to the peer to slow down sending. This is particularly useful for transmitting data from A to B via gaio when B is slower than A.
  8. Tiny, around 1000 LOC, easy to debug.
  9. Support for Linux and BSD.
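
A minimal sketch of feature 3: submitting a read with a nil buffer so the watcher's internal swap buffer is used (here w is a *gaio.Watcher and conn an accepted net.Conn, as in the TL;DR below):

        // a nil buffer asks gaio to use its internal swap buffer
        if err := w.Read(nil, conn, nil); err != nil {
                log.Println(err)
        }
        // in the WaitIO loop, res.Buffer[:res.Size] will then point into the
        // internal swap buffer; per convention 7 below, it is only valid
        // until the next call to w.WaitIO() returns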

Conventions

  1. Once you submit an async read/write request with a net.Conn to a gaio.Watcher, the connection is delegated to the watcher at first submit. Subsequent direct use of the conn, such as conn.Read or conn.Write, will return an error, but TCP properties set by SetReadBuffer(), SetWriteBuffer(), SetLinger(), SetKeepAlive(), and SetNoDelay() will be inherited.
  2. If you decide not to use a connection anymore, call Watcher.Free(net.Conn) to close the socket and free related resources immediately.
  3. If you forget to call Watcher.Free(net.Conn), the runtime garbage collector will clean up the related system resources once nothing in the system holds the net.Conn.
  4. If you forget to call Watcher.Close(), the runtime garbage collector will clean up ALL related system resources once nothing in the system holds the Watcher.
  5. For connection load balancing, you can create multiple gaio.Watchers and distribute net.Conns with your own strategy.
  6. For acceptor load balancing, you can use go-reuseport as the listener.
  7. For read requests submitted with a nil buffer, the []byte returned from Watcher.WaitIO() is only SAFE to use until the next call to Watcher.WaitIO() returns (see the sketch after this list).
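
A minimal sketch of conventions 2 and 7 combined: freeing a connection on error, and copying data out of the swap buffer before the next WaitIO call (process is a hypothetical handler):

        results, err := w.WaitIO()
        if err != nil {
                return
        }
        for _, res := range results {
                if res.Error != nil {
                        w.Free(res.Conn) // close the socket and release resources now
                        continue
                }
                if res.Operation == gaio.OpRead {
                        // res.Buffer may point into the internal swap buffer,
                        // so copy the bytes out before calling WaitIO again
                        msg := make([]byte, res.Size)
                        copy(msg, res.Buffer[:res.Size])
                        process(res.Conn, msg) // hypothetical handler
                }
        }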

TL;DR

package main

import (
        "log"
        "net"

        "github.com/xtaci/gaio"
)

// this goroutine waits for all I/O events and sends back everything it
// receives, asynchronously
func echoServer(w *gaio.Watcher) {
        for {
                // loop wait for any IO events
                results, err := w.WaitIO()
                if err != nil {
                        log.Println(err)
                        return
                }

                for _, res := range results {
                        switch res.Operation {
                        case gaio.OpRead: // read completion event
                                if res.Error == nil {
                                        // send back everything; we won't start reading again until the write completes.
                                        // submit an async write request
                                        w.Write(nil, res.Conn, res.Buffer[:res.Size])
                                }
                        case gaio.OpWrite: // write completion event
                                if res.Error == nil {
                                        // since write has completed, let's start read on this conn again
                                        w.Read(nil, res.Conn, res.Buffer[:cap(res.Buffer)])
                                }
                        }
                }
        }
}

func main() {
        w, err := gaio.NewWatcher()
        if err != nil {
                log.Fatal(err)
        }
        defer w.Close()

        go echoServer(w)

        ln, err := net.Listen("tcp", "localhost:0")
        if err != nil {
                log.Fatal(err)
        }
        log.Println("echo server listening on", ln.Addr())

        for {
                conn, err := ln.Accept()
                if err != nil {
                        log.Println(err)
                        return
                }
                log.Println("new client", conn.RemoteAddr())

                // submit the first async read IO request
                err = w.Read(nil, conn, make([]byte, 128))
                if err != nil {
                        log.Println(err)
                        return
                }
        }
}

More examples

Push server

package main

import (
        "fmt"
        "log"
        "net"
        "time"

        "github.com/xtaci/gaio"
)

func main() {
        // by simply replacing net.Listen with reuseport.Listen, everything else stays the same as in this push server
        // ln, err := reuseport.Listen("tcp", "localhost:0")
        ln, err := net.Listen("tcp", "localhost:0")
        if err != nil {
                log.Fatal(err)
        }

        log.Println("pushing server listening on", ln.Addr(), ", use telnet to receive push")

        // create a watcher
        w, err := gaio.NewWatcher()
        if err != nil {
                log.Fatal(err)
        }

        // ticker and channels
        ticker := time.NewTicker(time.Second)
        chConn := make(chan net.Conn)
        chIO := make(chan gaio.OpResult)

        // watcher.WaitIO goroutine
        go func() {
                for {
                        results, err := w.WaitIO()
                        if err != nil {
                                log.Println(err)
                                return
                        }

                        for _, res := range results {
                                chIO <- res
                        }
                }
        }()

        // main logic loop, like your program core loop.
        go func() {
                var conns []net.Conn
                for {
                        select {
                        case res := <-chIO: // receive IO events from watcher
                                if res.Error != nil {
                                        continue
                                }
                                conns = append(conns, res.Conn)
                        case t := <-ticker.C: // receive ticker events
                                push := []byte(fmt.Sprintf("%s\n", t))
                                // all conns receive the same 'push' content
                                for _, conn := range conns {
                                        w.Write(nil, conn, push)
                                }
                                conns = nil
                        case conn := <-chConn: // receive new connection events
                                conns = append(conns, conn)
                        }
                }
        }()

        // this loop keeps accepting connections and sends them to the main loop
        for {
                conn, err := ln.Accept()
                if err != nil {
                        log.Println(err)
                        return
                }
                chConn <- conn
        }
}

Documentation

For complete documentation, see the associated Godoc.

Benchmarks

Test Case      Throughput test with 64KB buffer
Description    A client keeps sending 64KB blocks to the server; the server keeps reading and echoing back whatever it receives, and the client keeps receiving until all bytes have arrived successfully
Command        go test -v -run=^$ -bench Echo
Macbook Pro    1695.27 MB/s   518 B/op   4 allocs/op
Linux AMD64    1883.23 MB/s   518 B/op   4 allocs/op
Raspberry Pi4  354.59 MB/s    334 B/op   4 allocs/op

Test Case      8K concurrent connection echo test
Description    8192 clients each send 1KB to the server; the server keeps reading and echoing back whatever it receives, and each client keeps receiving until all bytes have arrived successfully
Command        go test -v -run=8k
Macbook Pro    1.09s
Linux AMD64    0.94s
Raspberry Pi4  2.09s

Regression

(regression plot)

X -> number of concurrent connections, Y -> time of completion in seconds

Best-fit values	 
Slope	8.613e-005 ± 5.272e-006
Y-intercept	0.08278 ± 0.03998
X-intercept	-961.1
1/Slope	11610
 
95% Confidence Intervals	 
Slope	7.150e-005 to 0.0001008
Y-intercept	-0.02820 to 0.1938
X-intercept	-2642 to 287.1
 
Goodness of Fit	 
R square	0.9852
Sy.x	0.05421
 
Is slope significantly non-zero?	 
F	266.9
DFn,DFd	1,4
P Value	< 0.0001
Deviation from horizontal?	Significant
 
Data	 
Number of XY pairs	6
Equation	Y = 8.613e-005*X + 0.08278

License

gaio source code is available under the MIT License.

Status

Stable


gaio's Issues

OOM for 3K websockets

Hi,

I am trying to implement a websocket push-based server using this library, and I constantly run into OOM with a large number of sockets, e.g. 3K. I am wondering why this is happening. Below is the code. OOM doesn't happen with a small number of sockets, where memory seems stable; it only happens with a large number of sockets.

        clients := map[string]net.Conn{}
	go func() {
		for {
			select {
			case res := <-chIO: // receive IO events from watcher
				if res.Error != nil {
					log.Error().Msgf("Error receiving IO event from watcher: %v", res.Error)
					delete(clients, res.Conn.RemoteAddr().String())
					err = w.Free(res.Conn)
					if err != nil {
						log.Error().Msgf("error freeing connection: %v", err)
					}
					continue
				}
			case feed := <-out:
				f := ws.NewTextFrame(feed)
				bts := CompileHeader(f.Header)
				for index, conn := range clients {
					if conn != nil {
						err = w.Write(nil, conn, bts)
						if err != nil {
							if errors.Is(err, syscall.EPIPE) || errors.Is(err, syscall.ECONNRESET) {
								delete(clients, index)
							} else {
								log.Error().Msgf("unable to write header: %v", err)
							}
						}
						err = w.Write(nil, conn, f.Payload)
						if err != nil {
							if errors.Is(err, syscall.EPIPE) || errors.Is(err, syscall.ECONNRESET) {
								delete(clients, index)
							} else {
								log.Error().Msgf("unable to write payload: %v", err)
							}
						}
					}
				}
			case conn := <-chConn: // receive new connection events
				clients[conn.RemoteAddr().String()] = conn
			}
		}
	}()

I did some profiling with pprof and here is what I have

heap profile: 35098695: 5565869472 [80607043: 16351145656] @ heap/2
14246148: 2507322048 [14882947: 2619398672] @ 0x84d345 0x475a92 0x84b25a 0x8f7a31 0x8f785b 0x46d041
	0x84d344	github.com/xtaci/gaio.init.0.func1+0x24			/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:28
	0x475a91	sync.(*Pool).Get+0xb1					/usr/local/go/src/sync/pool.go:148
	0x84b259	github.com/xtaci/gaio.(*watcher).aioCreate+0x1b9	/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:272
	0x8f7a30	github.com/xtaci/gaio.(*watcher).Write+0x7b0		/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:240

14246061: 2507306736 [14882895: 2619389520] @ 0x84d345 0x475a92 0x84b25a 0x8f7845 0x8f75db 0x46d041
	0x84d344	github.com/xtaci/gaio.init.0.func1+0x24			/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:28
	0x475a91	sync.(*Pool).Get+0xb1					/usr/local/go/src/sync/pool.go:148
	0x84b259	github.com/xtaci/gaio.(*watcher).aioCreate+0x1b9	/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:272
	0x8f7844	github.com/xtaci/gaio.(*watcher).Write+0x5c4		/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:240

6377990: 306143520 [24276054: 1165250592] @ 0x84ca52 0x84ca3e 0x84cb0f 0x84c0b8 0x46d041
	0x84ca51	container/list.(*List).insertValue+0x4d1		/usr/local/go/src/container/list/list.go:104
	0x84ca3d	container/list.(*List).PushBack+0x4bd			/usr/local/go/src/container/list/list.go:155
	0x84cb0e	github.com/xtaci/gaio.(*watcher).handlePending+0x58e	/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:563
	0x84c0b7	github.com/xtaci/gaio.(*watcher).loop+0x2d7		/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:437

1: 132481024 [1: 132481024] @ 0x84b425 0x8f7a31 0x8f785b 0x46d041
	0x84b424	github.com/xtaci/gaio.(*watcher).aioCreate+0x384	/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:276
	0x8f7a30	github.com/xtaci/gaio.(*watcher).Write+0x7b0		/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:240

1: 67821568 [1: 67821568] @ 0x84b425 0x8f7845 0x8f75db 0x46d041
	0x84b424	github.com/xtaci/gaio.(*watcher).aioCreate+0x384	/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:276
	0x8f7844	github.com/xtaci/gaio.(*watcher).Write+0x5c4		/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:240

159604: 28090304 [159656: 28099456] @ 0x84d345 0x475a92 0x84b25a 0x8f7545 0x8f74e7 0x46d041
	0x84d344	github.com/xtaci/gaio.init.0.func1+0x24			/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:28
	0x475a91	sync.(*Pool).Get+0xb1					/usr/local/go/src/sync/pool.go:148
	0x84b259	github.com/xtaci/gaio.(*watcher).aioCreate+0x1b9	/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:272
	0x8f7544	github.com/xtaci/gaio.(*watcher).Free+0x2c4		/home/circleci/.go_workspace/pkg/mod/github.com/xtaci/[email protected]/watcher.go:256

`go get` error

When I try to download the package using go get -u github.com/xtaci/gaio I receive the following output:

# github.com/xtaci/gaio
..\..\..\..\pkg\mod\github.com\xtaci\[email protected]\aio_generic.go:132:2: undefined: watcher

The poller does not distinguish read requests from write requests

Hi! Reading the code, I noticed that gaio does not distinguish between read and write requests: it registers both EPOLLIN and EPOLLOUT with the poller regardless. This causes a problem: in an echo server, before the client sends any data, the loop keeps being woken up because the connection is writable, burning CPU for nothing. What is your take on this issue?

data race issue

go version 1.13.8
OS: centOS 7.2

$/usr/local/go/bin/go test -race .
2020/02/17 16:56:45 accept tcp 127.0.0.1:35019: use of closed network connection
2020/02/17 16:56:45 watcher closed
2020/02/17 16:56:48 accept tcp 127.0.0.1:40316: use of closed network connection
2020/02/17 16:56:48 accept tcp 127.0.0.1:40504: use of closed network connection
2020/02/17 16:56:48 watcher closed
2020/02/17 16:56:48 accept tcp 127.0.0.1:35054: use of closed network connection
2020/02/17 16:56:48 watcher closed
2020/02/17 16:56:48 accept tcp 127.0.0.1:43625: use of closed network connection
2020/02/17 16:56:48 watcher closed
2020/02/17 16:56:50 accept tcp 127.0.0.1:36666: use of closed network connection
2020/02/17 16:56:50 watcher closed
2020/02/17 16:56:50 accept tcp 127.0.0.1:41583: use of closed network connection
2020/02/17 16:56:50 watcher closed
==================
WARNING: DATA RACE
Write at 0x00c0003c6070 by goroutine 81:
  github.com/xtaci/gaio.TestReadFull()
      /home/go/gaio/aio_test.go:417 +0x4e2
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:909 +0x199

Previous write at 0x00c0003c6070 by goroutine 90:
  github.com/xtaci/gaio.TestReadFull.func1()
      /home/go/gaio/aio_test.go:410 +0xaa

Goroutine 81 (running) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:960 +0x651
  testing.runTests.func1()
      /usr/local/go/src/testing/testing.go:1202 +0xa6
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:909 +0x199
  testing.runTests()
      /usr/local/go/src/testing/testing.go:1200 +0x521
  testing.(*M).Run()
      /usr/local/go/src/testing/testing.go:1117 +0x2ff
  main.main()
      _testmain.go:112 +0x223

Goroutine 90 (finished) created at:
  github.com/xtaci/gaio.TestReadFull()
      /home/go/gaio/aio_test.go:408 +0x438
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:909 +0x199
==================
2020/02/17 16:56:51 accept tcp 127.0.0.1:37621: use of closed network connection
--- FAIL: TestReadFull (1.49s)
    aio_test.go:437: written: <nil> 104857600
    aio_test.go:439: read: <nil> 104857600
    testing.go:853: race detected during execution of test
2020/02/17 16:56:51 watcher closed
2020/02/17 16:56:52 accept tcp 127.0.0.1:37807: use of closed network connection
2020/02/17 16:56:52 watcher closed
2020/02/17 16:56:52 accept tcp 127.0.0.1:41224: use of closed network connection
2020/02/17 16:56:52 watcher closed
2020/02/17 16:56:52 accept tcp 127.0.0.1:39491: use of closed network connection
2020/02/17 16:56:52 watcher closed
2020/02/17 16:56:53 accept tcp 127.0.0.1:45587: use of closed network connection
2020/02/17 16:56:53 watcher closed
2020/02/17 16:56:53 accept tcp 127.0.0.1:33780: use of closed network connection
2020/02/17 16:56:53 watcher closed
2020/02/17 16:56:54 accept tcp 127.0.0.1:40090: use of closed network connection
2020/02/17 16:56:54 watcher closed
2020/02/17 16:56:56 accept tcp 127.0.0.1:39469: use of closed network connection
2020/02/17 16:56:56 watcher closed
2020/02/17 16:56:58 accept tcp 127.0.0.1:39311: use of closed network connection
2020/02/17 16:56:58 watcher closed
2020/02/17 16:56:58 accept tcp 127.0.0.1:42949: use of closed network connection
2020/02/17 16:56:58 watcher closed
2020/02/17 16:56:58 accept tcp 127.0.0.1:45309: use of closed network connection
2020/02/17 16:56:58 watcher closed
2020/02/17 16:56:59 accept tcp 127.0.0.1:36619: use of closed network connection
2020/02/17 16:56:59 watcher closed
2020/02/17 16:57:01 accept tcp 127.0.0.1:39795: use of closed network connection
2020/02/17 16:57:01 watcher closed
2020/02/17 16:57:02 accept tcp 127.0.0.1:45845: use of closed network connection
2020/02/17 16:57:02 watcher closed
2020/02/17 16:57:03 accept tcp 127.0.0.1:37195: use of closed network connection
2020/02/17 16:57:03 watcher closed
2020/02/17 16:57:05 accept tcp 127.0.0.1:45655: use of closed network connection
2020/02/17 16:57:05 watcher closed
2020/02/17 16:57:06 accept tcp 127.0.0.1:43165: use of closed network connection
2020/02/17 16:57:06 watcher closed
2020/02/17 16:57:08 accept tcp 127.0.0.1:33451: use of closed network connection
2020/02/17 16:57:08 watcher closed
FAIL
FAIL    github.com/xtaci/gaio   23.326s
FAIL

does this support UDP?

Really interesting projects you do; I'm learning a lot here and really like your coding style!

But I'm wondering: does this support UDP? And if not, how much work would it take to make a UDP version?

tls support

On a whim, I polished nbio some more and added kqueue support (but I don't have a Mac, so I could only run the tests on a CI platform to show it's basically OK; I haven't done much other testing). Windows uses std/net.

I also switched the timer back to a heap. With the timing wheel I found that if the epoll_wait interval is too long, timer precision suffers, and responses are slow and performance poor under a flood of events; if the interval is too short, ticks are too frequent and the CPU spikes when there is no data.

I copied the standard library's tls and hacked it to support nbio. I ran simple tests for things like "packet sticking", and it should be basically stable now:
https://github.com/lesismal/nbio/tree/master/examples/tls

Other async libraries can use this tls hack too, but it depends on implementing net.Conn as the under-layer of tls.Conn.
I remember you once told me that having nbio.Conn implement net.Conn wasn't really necessary; looking at it now, it turned out to be an unintended win, hahaha.
Implementing net.Conn is quite worthwhile, if only to give the application layer SetDeadline.
Also, I wrote an HTTP 1.x parser, so the HTTP server is now supported.
Then I wrote a websocket upgrader, so websocket is supported too.

Taking it slow for a bit; later I want to support HTTP 2.0 as well. The standard library's tls is rather wasteful, and when I have time I'd like to rewrite it.

Detach a connection from the watcher.

In a project, I need to detach a connection from the watcher and send it to another process, so I modified the flow of opDelete to support a new action opDetach: liukun@0a9a8f6

Do you think it can be added as a feature of gaio? If so, I'll submit a PR later with a more polished design than the commit above.

A question about copy(pending, w.pending) in the loop function

May I ask:

// func (w *watcher) loop()
if cap(pending) < cap(w.pending) {
    pending = make([]*aiocb, 0, cap(w.pending))
}
pending = pending[:len(w.pending)]
copy(pending, w.pending)

Since the copy length is len(w.pending), why does the first statement grow the slice based on cap(w.pending)?

Memory consumption benchmark?

@xtaci Thanks a lot for the great work as usual! 👍

Since this project is primarily aimed at reducing memory consumption and context switching, I'd suggest that a side-by-side comparison of memory usage for the two test cases shown in the README would be very helpful.

go 1.17 fails to build on Linux

Reproduction environment:
CentOS release 6.3 (Final)
go version go1.15.2 linux/amd64

The error message is as follows:

github.com/xtaci/gaio

cgo: gcc did not produce error at completed:1
on input:

#line 1 "cgo-builtin-prolog"
#include <stddef.h> /* for ptrdiff_t and size_t below */

/* Define intgo when compiling with GCC. */
typedef ptrdiff_t intgo;

#define GO_CGO_GOSTRING_TYPEDEF
typedef struct { const char *p; intgo n; } GoString;
typedef struct { char *p; intgo n; intgo c; } GoBytes;
GoString GoString(char *p);
GoString GoStringN(char *p, int l);
GoBytes GoBytes(void *p, int n);
char *CString(GoString);
void *CBytes(GoBytes);
void *_CMalloc(size_t);

__attribute__((unused))
static size_t _GoStringLen(GoString s) { return (size_t)s.n; }

__attribute__((unused))
static const char *_GoStringPtr(GoString s) { return s.p; }
#line 5 "/home/batsdk/code/baidu/personal-code/crab-console/vendor/github.com/xtaci/gaio/affinity_linux.go"

#define _GNU_SOURCE
#include <sched.h>
#include <pthread.h>

void lock_thread(int cpuid) {
pthread_t tid;
cpu_set_t cpuset;

tid = pthread_self();
CPU_ZERO(&cpuset);
CPU_SET(cpuid, &cpuset);
pthread_setaffinity_np(tid, sizeof(cpu_set_t), &cpuset);

}

#line 1 "cgo-generated-wrapper"
#line 1 "not-declared"
void __cgo_f_1_1(void) { typeof(int) *__cgo_undefined__1; }
#line 1 "not-type"
void __cgo_f_1_2(void) { int *__cgo_undefined__2; }
#line 1 "not-int-const"
void __cgo_f_1_3(void) { enum { __cgo_undefined__3 = (int)*1 }; }
#line 1 "not-num-const"
void __cgo_f_1_4(void) { static const double __cgo_undefined__4 = (int); }
#line 1 "not-str-lit"
void __cgo_f_1_5(void) { static const char __cgo_undefined__5[] = (int); }
#line 2 "not-declared"
void __cgo_f_2_1(void) { typeof(lock_thread) *__cgo_undefined__1; }
#line 2 "not-type"
void __cgo_f_2_2(void) { lock_thread *__cgo_undefined__2; }
#line 2 "not-int-const"
void __cgo_f_2_3(void) { enum { __cgo_undefined__3 = (lock_thread)*1 }; }
#line 2 "not-num-const"
void __cgo_f_2_4(void) { static const double __cgo_undefined__4 = (lock_thread); }
#line 2 "not-str-lit"
void __cgo_f_2_5(void) { static const char __cgo_undefined__5[] = (lock_thread); }
#line 1 "completed"
int __cgo__1 = __cgo__2;

full error output:
cc1: error: unrecognized command line option "-fno-lto"

Unread data under epoll edge triggering

With epoll edge triggering, a read event requires draining all available data in one pass, but the current implementation can leave data unread when the read buffer is too small. Is there a plan to improve this?
You would need to set a flag: once the data exceeds the buffer size, a further read has to be issued proactively later. Even a very large buffer doesn't help, because the client may keep sending without pause.
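
For context, a generic edge-triggered drain loop looks like this sketch (raw syscalls for illustration; handle and closeConn are hypothetical, and this is not gaio's actual implementation):

        for {
                n, err := syscall.Read(fd, buf)
                if n > 0 {
                        handle(buf[:n]) // consume what we got
                }
                if err == syscall.EAGAIN {
                        break // drained: safe to wait for the next edge
                }
                if err != nil || n == 0 {
                        closeConn(fd) // error or EOF
                        break
                }
        }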

Watcher should not be limited to net.Conn, file descriptor maybe better

I love Linux; everything is an fd in Linux, so I can watch network connections, files, event fds, timer fds, signal fds, etc. in one epoll/select loop.

I am designing a proxy program that transports data between a network connection and a tun device. This is a simple feature in C, but I cannot find a good Go framework for it. evio and gnet focus only on the server side; gaio is the best match.

But gaio.Watcher only accepts a net.Conn parameter (I know Go can wrap an os.File as a net.Conn); what about event fds, timer fds, and signal fds?
