fako1024 / slimcap
A high-performance network packet capture library
License: Apache License 2.0
Prior to (or shortly after) the initial release, the documentation should be straightened out to improve usability beyond the primary use case of goProbe.
DoD
- README.md (content + badges)
- examples
Right now there is a fixed set of BPF filters in place to reject non-IPv4/IPv6 packets as early as possible in the pipeline. However, setting additional (!) manual filters could further increase usability. This could probably be as easy as:
func (s *Source) AddEBPFFilter(fd int32) error {
	return setsockopt(s.fd, unix.SOL_SOCKET, unix.SO_ATTACH_BPF, unsafe.Pointer(&fd), 4)
}
The only caveat is that the predefined filters must remain in place, so they must be combined in the background.
DoD
- Make the capture source filterable (via captureSourceFilterable or similar)
In #54 I made a few attempts at generating IPv6 Jumbograms (RFC) with an extended Hop-by-Hop header, but failed to do so. Some sources I tried to follow:
>>> scapy
send(IPv6(dst='::1', src='fe80::dead:beef')/IPv6ExtHdrHopByHop(options=Jumbo(jumboplen=2**30)))
Given that this feature seems to be very uncommon, it doesn't have high priority though.
DoD
Currently, setting up a new source triggers a call to net.InterfaceByName(<name>), which in turn calls a stack of methods to retrieve the current list of interfaces, leading to quite expensive syscall invocations. While this is OK(ish) for single instances and / or few static interfaces, in situations with a lot of interfaces and / or a large number of instabilities (causing repeated up / down and hence re-initialization actions) this can cause significant overhead in total.
DoD
net.Interfaces
list upon initialization to reduce overhead when bulkingIn order to track side-effects of changes on performance it would be enormously helpful to have automated benchmarks / comparison via benchstat
as part of the CI pipeline (maybe not on each commit, but e.g. on filing a PR). Of course this is inherently difficult (because of reproducibility or rather lack thereof), but I've seen other projects do it.
DoD
It seems to be quite common to have gzip-compressed pcap files and tools like wireshark support opening them out of the box. Similarly it would be helpful to provide the same feature for the pcap source of slimcap, as it not only extends functionality but also allows for providing compressed test files more easily (e.g. for mock tests in els0r/goProbe#88).
Currently, when creating a new link / interface the code performs a duplicate path sanitization via filepath.Clean()
(because it opens two files in the same directory), which is unnecessary. Instead, it would be better to sanitize the name of the interface once and then call the individual functions as methods (using the sanitized / stored name of the interface).
When piping packets from one mock source to another it may happen that after completing a full block in the ring buffer there is a warning (from the crude debugging statement to be removed in #6):
mock WUT (after resetting)? 0 150
This is caused by the forced block finalization which may leave the nextOffset
value from a previous run through that block untouched, pointing to a next packet that doesn't exist. This doesn't cause catastrophic problems (because the nextPacket()
call doesn't advance beyond the expected number of packets in said block) but it's an easy fix (the issue was discovered when implementing mock E2E tests in els0r/goProbe#88).
If actual packets are truncated / smaller than the defined capture snaplen, the snaplen
value in the TPacket header is set to the length of the packet, not the snaplen (as is currently the case for the mock source). This usually isn't a problem (it will just allocate more memory than actually required) but may mask issues that would occur in a "real" capture.
Instead of having to retrieve a full packet and then call IPLayer(), it would be beneficial to provide direct access to the IP layer without additional overhead / complexity.
As found in els0r/goProbe#188:
panic: LinkType 823 not supported (yet)
goroutine 89 [running]:
github.com/fako1024/slimcap/link.Type.IPHeaderOffset(...)
/root/go/pkg/mod/github.com/fako1024/[email protected]/link/link.go:75
github.com/fako1024/slimcap/capture/afpacket/afring.NewSourceFromLink(0xc0007040e0, {0xc000595e58, 0x3, 0x411507?})
/root/go/pkg/mod/github.com/fako1024/[email protected]/capture/afpacket/afring/afring.go:80 +0x7ba
github.com/fako1024/slimcap/capture/afpacket/afring.NewSource({0xc00003e3a0, 0x9}, {0xc000595e58, 0x3, 0x3})
/root/go/pkg/mod/github.com/fako1024/[email protected]/capture/afpacket/afring/afring.go:62 +0xfb
github.com/els0r/goProbe/pkg/capture.glob..func1(0xc000536a80)
/goProbe/pkg/capture/capture.go:34 +0xc5
github.com/els0r/goProbe/pkg/capture.(*Capture).run(0xc000536a80)
/goProbe/pkg/capture/capture.go:162 +0x29
github.com/els0r/goProbe/pkg/capture.(*Manager).update.func2()
/goProbe/pkg/capture/capture_manager.go:367 +0x2f7
github.com/els0r/goProbe/pkg/capture.(*RunGroup).Run.func1()
/goProbe/pkg/capture/rungroup.go:25 +0x26
created by github.com/els0r/goProbe/pkg/capture.(*RunGroup).Run
/goProbe/pkg/capture/rungroup.go:24 +0x85
DoD
- Change the panic message from LinkType xxx not supported (yet) to LinkType xxx not supported by slimcap (yet), please open an issue on GitHub or similar
As identified in els0r/goProbe#126, the current concept of having to call Free()
after Close()
asynchronously leads to several issues and adds (by now) unnecessary complexity. Instead of having to perform two steps to close an interface, the functionality should be moved to a single method (obviously that's Close()
) and instead of having to take care of waiting for the Close()
operation to have the desired effect (e.g. by handling the ErrCaptureClosed
from the caller side before calling Free()
) the method itself shall wait until the descriptor is actually closed (at which point no interaction with the source is possible in any case, so free()
ing it is safe by definition). This brings the following advantages:
- No free()ing of the resources from a different goroutine than the caller of Close(), avoiding race conditions
- No separate call to Free()
- Determinism (once Close() returns, the capture is closed, period)
When using the Pipe()
construct for mock sources it would be very helpful to know when all data from the "base" source (e.g. a pcap file) has been read / consumed into the "piping" source, e.g. to allow test code to know how much data to expect in the end, which quite often is not known without actually processing it (in particular, pcap files do not support getting the number of captured packets without full perusal of the data).
DoD
In els0r/goProbe#88 several issues emerged when trying to implement an E2E test scenario. After some tracking down it became clear that those were encountered because the E2E test firstly created a test situation where the mock ring buffer actually ran into a wrap-around (all unit / integration tests before operated within a single "run" through the ring buffer, never exhausting its size).
This issue tracks fixes to the current implementation.
Currently, only the ring buffer capture source fully supports sending Unblock()
or Stop()
events to a potentially blocking poll on the socket. In order to match that functionality, the non-ring buffer source requires similar logic.
- Check if unix.Recvfrom() even unblocks on Close() (reading suggests it doesn't on Linux)
In els0r/goProbe#154 some issues regarding counters were observed. For maximum safety, all packet / drop counters should be unsigned (they are counters, after all).
When running the tests on a Raspberry Pi (2B) a crash is encountered when accessing the mock socket packet counter:
--- FAIL: TestBasicInteraction (0.00s)
panic: unaligned 64-bit atomic operation [recovered]
panic: unaligned 64-bit atomic operation
goroutine 6 [running]:
testing.tRunner.func1.2({0x1c36f8, 0x22cc18})
/usr/local/go/src/testing/testing.go:1545 +0x27c
testing.tRunner.func1()
/usr/local/go/src/testing/testing.go:1548 +0x42c
panic({0x1c36f8, 0x22cc18})
/usr/local/go/src/runtime/panic.go:914 +0x244
runtime/internal/atomic.panicUnaligned()
/usr/local/go/src/runtime/internal/atomic/unaligned.go:8 +0x24
runtime/internal/atomic.Xadd64(0x2000d04, 0x1)
/usr/local/go/src/runtime/internal/atomic/atomic_arm.s:258 +0x14
github.com/fako1024/slimcap/capture/afpacket/socket.(*MockFileDescriptor).IncrementPacketCount(...)
/root/Develop/slimcap/capture/afpacket/socket/socket_mock.go:48
github.com/fako1024/slimcap/capture/afpacket/socket.TestBasicInteraction(0x21320f0)
/root/Develop/slimcap/capture/afpacket/socket/socket_test.go:21 +0x14c
testing.tRunner(0x21320f0, 0x1fc90c)
/usr/local/go/src/testing/testing.go:1595 +0x124
created by testing.(*T).Run in goroutine 1
/usr/local/go/src/testing/testing.go:1648 +0x3f0
When taking a close look at the documentation at the bottom of https://pkg.go.dev/sync/atomic it is clear why:
On ARM, 386, and 32-bit MIPS, it is the caller's responsibility to arrange for 64-bit alignment of 64-bit words accessed atomically via the primitive atomic functions (types Int64 and Uint64 are automatically aligned).
The current implementation used the "old" style of incrementing a basic uint64 via atomic.AddUint64() etc., which is not guaranteed to be memory-aligned. Switching to the atomic.Uint64 type referenced in the documentation should solve the issue.
DoD
- Switch to the alignment-safe atomic.Uint64 type of the atomic package
For readability it would be helpful to have an actual enum providing the same named packet types (a.k.a. direction w.r.t. interface) as provided by the Linux kernel.
DoD
- Provide named constants (e.g. instead of checking packetType == 0 for inbound)
In order to support mock tests using stand-in capture sources it was necessary to add some guards against data races in the TPacket
access process. Furthermore, socket handling required some additions as well in order to support mocking in the first place. In addition to the increase in complexity these modifications also incur a (granted, small) performance penalty due to the more complex code paths and the atomic nature of some operations not required unless mocking is used.
In order to provide more control over these features / modifications, a simple build tag that allows disabling mock functionality (and all that comes with it) would be helpful. Depending on the scenario, this could also be used as a "production mode" toggle for projects using slimcap (squeezing out the last bit of performance while also removing all mock code & specifics from production binaries).
DoD
slimcap_nomock
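Mechanically, this is the standard Go build-constraint pattern: two files carry mutually exclusive //go:build lines, and `go build -tags slimcap_nomock` selects the stripped-down variant. A sketch (file names and package contents are illustrative only):

```go
// file: socket_mock.go — compiled by default (mock support included)
//go:build !slimcap_nomock

package socket

// mock socket handling, atomic guards, etc. live here ...
```

```go
// file: socket_nomock.go — compiled with `go build -tags slimcap_nomock`,
// stripping all mock code & specifics from production binaries
//go:build slimcap_nomock

package socket

// no-op stubs / lean fast paths live here ...
```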
Currently data is copied from the ring buffer, no matter if a buffer has been provided to NextPacket()
/ NextIPPacket()
. Since the data in the ring buffer is invalidated only upon the next call to those methods, we can re-add a true zero-copy mode and simply do:
// Populate the packet
data = s.curTPacketHeader.payloadNoCopyAtOffset(uint32(s.ipLayerOffset), uint32(snapLen))
//s.curTPacketHeader.payloadCopyPutAtOffset(data, uint32(s.ipLayerOffset))
where
func (t tPacketHeader) payloadNoCopyAtOffset(offset, to uint32) []byte {
mac := uint32(*(*uint16)(unsafe.Pointer(&t.data[t.ppos+24])))
return t.data[t.ppos+mac+offset : t.ppos+mac+to]
}
Similar logic holds for the non-ringbuffer source, where a copy is made as well in normal mode in both aforementioned methods.
DoD
- Add a ZeroCopy() option to both afpacket sources
- NextPacket() / NextIPPacket() (both afpacket sources)
- NextPacket()
See also els0r/goProbe#83
In the course of implementing els0r/goProbe#83 a few minor refactor / cleanup tasks came up. These are tracked here.
There seem to be two independent race conditions triggering deadlocks, observed in els0r/goProbe#71:
- One in poll.go (Poll() mock path), which runs into a deadlock when performing the actual read().
- One in goProbe (which may or may not be related to mock only).
Currently, the optimized assembler logic for PPOLL
utilizes runtime.entersyscallblock()
. The rationale behind this was to allow the scheduler to release resources faster (assuming the poll will be blocking for a long time). However, practical tests / benchmarks have shown that under heavy load / traffic (or, to be more precise: when a lot of polling has to be done) the time spent blocking is actually very short (which defies the whole concept of indicating to the scheduler that this won't be the case), and apparently the "misinformation" to the scheduler causes quite some overhead (most likely because it has to reschedule right away instead of just keeping the current resources):
.../NextPayloadZeroCopy_NoBlock_10kiBx512-4 88184865 68.01 ns/op
.../NextPayloadZeroCopy_Block10kiBx512-4 30295363 176.7 ns/op
On the other hand, if there's little traffic (and hence the blocking state might be retained a while) we a) don't really care about squeezing out that last bit of performance and b) the scheduler will release the resources after a while (O(20µs)) anyway, achieving the same state that runtime.entersyscallblock() essentially just "forces" right away.
Right now the flow for a mock interface slightly differs from a real one in the sense of opening / closing (because the source potentially has to be populated) so that in various scenarios it might be difficult to simulate repeated open()
/ close()
. For better usability, this flow could be improved.
DoD
In els0r/goProbe#80 and other places there are already some bits and pieces regarding mock capture sources, something that greatly helps testing and benchmarks alike.
Ideally this would be basic functionality on the lowest level in this package, i.e. a mockable socket that can be wrapped with an actual ring buffer (and, depending on the outcome of #4, maybe even used for the non-ring source).
DoD
Currently the code is structured in a manner that prefers function calls over efficiency when it comes to interacting with the current TPacketHeader
in the buffer. For example, in order to perform the zero-copy payload extraction, multiple individual calls are performed to fetch the data:
return s.curTPacketHeader.payloadNoCopyAtOffset(0, s.curTPacketHeader.snapLen()),
s.curTPacketHeader.packetType(),
s.curTPacketHeader.pktLen(),
While these calls are perfectly inlined by the compiler, each of them performs an individual unsafe
cast to the respective variable / type (which has an overhead of course). Cursory tests show that it might be more efficient to perform a single unsafe
call and cast all relevant data onto a partial header struct (allocated on the stack) that encompasses all relevant data (conveniently, the required information is basically memory aligned in a single "blob" of three uint32
(plus a superfluous one that we can skip)). On a repeated and thorough benchmark this tiny change amounts to an increase of >20% in throughput:
│ sec/op │ sec/op vs base │
CaptureMethods/NextPayloadZeroCopy-4 24.79n ± 10% 19.20n ± 0% -22.55% (p=0.000 n=25)
In addition, there are two caveats that bother me:
- The number of remaining packets is tracked in the tPacketHeader struct (decrementing it by one each time a packet is fetched). However, looking at the implementation here, the final packet of a block is guaranteed to have the next offset set to zero (which is an abort criterion for the loop since it is never zero otherwise).
- The PPOLL logic in nextPacket() is very intricate (due to the fact that it has to handle the scenario that it was unblocked and has to continue wherever it was prior to that event). Maybe there's a way to simplify the logic and reduce both overhead and code complexity.
DoD
- Simplify the PPOLL logic
There are several debugging statements currently in place in the nextPacket()
method of the ring buffer source used to check for buffer under-/overruns in the wild. Prior to release those should be removed.
DoD
When analyzing the performance of packet populating in els0r/goProbe#38 it turned out that in tight loops / under heavy load there seems to be a substantial overhead when accessing interface methods on packets returned by any of the capture sources (wrapped behind the capture.Packet interface).
It might be prudent to assess whether that interface can be replaced with a fixed struct that still allows enforcing a certain "interface" without actually having to use one.
DoD
- Replace capture.Packet with a struct
- Adapt the goProbe capture accordingly
Right now slimcap
fires off a quite generic message ("interface does not exist or is unsupported") if either of the two happens. For a user (especially engineers) this might be misleading (as in: one might think support is missing although a configured interface simply doesn't exist). These two should be independent errors. Since we panic for unsupported interfaces (which is intentional so we get people to actually open an issue 😛) we only need to adapt the text of the already existing error.
DoD
els0r/goProbe#88 revealed yet another small bug: As it turns out the current logic in afring.go
does not release the current ring buffer packet back to the kernel as soon as the last packet has been consumed, but instead upon the subsequent call to nextPacket()
. In practice this probably won't be a problem, but especially when testing we quite often know exactly how many packets have been processed and hence don't even try to perform said call to nextPacket()
just for the sake of releasing the resource. Consequently, any logic that checks e.g. if all blocks have been consumed will fail in such scenarios.
In addition, the E2E test did reveal yet another data race regarding that buffer (my analysis in #24 was not accurate - it just happens very rarely) that needs to be addressed (which in turn requires the aforementioned bug to be fixed).
DoD
- Fix the Pipe() / run() methods (we must not return from there until all blocks have been released back to the "kernel", i.e. the mock)
DoD
The NoDrain
mock source mode generates an inherent data race w.r.t. Close()
/ Free()
and accesses to the mock ring buffer:
WARNING: DATA RACE
Write at 0x00c0000b01f0 by goroutine 29:
github.com/fako1024/slimcap/capture/afpacket/afring.(*MockSource).Free()
/home/fako/Develop/go/pkg/mod/github.com/fako1024/[email protected]/capture/afpacket/afring/afring_mock.go:340 +0x52
github.com/els0r/goProbe/pkg/capture.(*Capture).process.func1()
/home/fako/Develop/go/src/github.com/els0r/goProbe/pkg/capture/capture.go:204 +0x257
Previous read at 0x00c0000b01f0 by goroutine 28:
github.com/fako1024/slimcap/capture/afpacket/afring.(*MockSource).RunNoDrain.func1()
/home/fako/Develop/go/pkg/mod/github.com/fako1024/[email protected]/capture/afpacket/afring/afring_mock.go:253 +0x337
github.com/fako1024/slimcap/capture/afpacket/afring.(*MockSource).RunNoDrain.func2()
/home/fako/Develop/go/pkg/mod/github.com/fako1024/[email protected]/capture/afpacket/afring/afring_mock.go:262 +0x47
Goroutine 29 (running) created at:
github.com/els0r/goProbe/pkg/capture.(*Capture).process()
/home/fako/Develop/go/src/github.com/els0r/goProbe/pkg/capture/capture.go:177 +0x136
github.com/els0r/goProbe/pkg/capture.testDeadlock()
/home/fako/Develop/go/src/github.com/els0r/goProbe/pkg/capture/capture_test.go:47 +0x2fe
github.com/els0r/goProbe/pkg/capture.TestHighTrafficDeadlock()
/home/fako/Develop/go/src/github.com/els0r/goProbe/pkg/capture/capture_skiprace_test.go:23 +0x35
testing.tRunner()
/usr/local/go/src/testing/testing.go:1576 +0x216
testing.(*T).Run.func1()
/usr/local/go/src/testing/testing.go:1629 +0x47
Goroutine 28 (finished) created at:
github.com/fako1024/slimcap/capture/afpacket/afring.(*MockSource).RunNoDrain()
/home/fako/Develop/go/pkg/mod/github.com/fako1024/[email protected]/capture/afpacket/afring/afring_mock.go:236 +0x276
github.com/els0r/goProbe/pkg/capture.testDeadlock()
/home/fako/Develop/go/src/github.com/els0r/goProbe/pkg/capture/capture_test.go:44 +0x8a4
github.com/els0r/goProbe/pkg/capture.TestHighTrafficDeadlock()
/home/fako/Develop/go/src/github.com/els0r/goProbe/pkg/capture/capture_skiprace_test.go:23 +0x35
testing.tRunner()
/usr/local/go/src/testing/testing.go:1576 +0x216
testing.(*T).Run.func1()
/usr/local/go/src/testing/testing.go:1629 +0x47
DoD
As raised in els0r/goProbe#231 the socket counters do not match the number of packets actually returned to the caller via Next...()
for loopback interfaces (because we skip 50% of them).
This should be remedied by correcting the stats by that factor (caveat: What about drops?)
Since by now all (tm) data races have been found / mitigated it should be fine (and a good idea) to enable (or better: add) a dedicated data race detection test to GitHub actions.
It would be helpful to provide automatic (optimal) snaplen values (to facilitate minimal data capture) based on link types to avoid having to manually fiddle with them.
Primarily for mocking, but also in general it would be great to be able to read pcap files, convert their content to instances of capture.Packet
(e.g. by providing a new source abiding to the capture.Source
interface).
DoD
- Accept an io.Reader as source (to support in-memory consumption)
- capture.Source implementation based on a pcap file reader
- Pipe() & AddPacketFromSource() in mock implementations to funnel packets from pcap into the mock
It's a known issue that on loopback devices all packets are duplicated (for both ring buffer and non-ring buffer sources) because the kernel observes all packets as both incoming and outgoing on the interface. Technically this makes sense, but it is the accepted standard (e.g. in libpcap
and other packet capture solutions) to drop the outgoing duplicate packets.
In order to be in line with that standard, slimcap should handle this automatically (and at the same time use the opportunity to supply packet type filtering capabilities, so it's a win-win).
DoD
- Link by packet filter mask
Right now the mock sources have to be continuously populated with mock data via their AddPacket()
method. While closely mimicking the concept of a packet (ring) buffer, this causes significant overhead (actually taking more CPU time than the actual consumption off the mock wire), skewing benchmarks in particular. In addition, it would help to provide a simpler way than having to populate the buffer manually.
DoD
Currently the project uses uber/zap
for logging in the example code (the core code is unaffected since it has no logging facilities). To remove that external dependency, everything should be migrated to log/slog
(available since Go 1.21).
Issue els0r/goProbe#117 revealed that the mock packet counter has a race condition when accessing mock source packet stats while still processing. This should be easily fixable by making the counter atomic.
DoD
MockFileDescriptor
packet counter atomicRight now there seems to be only a single allocating piece of code in the afring
capture source (after initial setup of the ring buffer that is):
flat flat% sum% cum cum%
49153 79.01% 79.01% 49153 79.01% github.com/fako1024/slimcap/capture/afpacket/afring.(*ringBuffer).nextTPacketHeader /home/fako/Develop/go/pkg/mod/github.com/fako1024/[email protected]/capture/afpacket/afring/afring.go:32 (inline)
4681 7.52% 86.54% 4681 7.52% regexp/syntax.(*Regexp).Simplify /usr/local/go/src/regexp/syntax/simplify.go:87
4681 7.52% 94.06% 4681 7.52% regexp/syntax.(*Regexp).Simplify /usr/local/go/src/regexp/syntax/simplify.go:98
The line in question is simply:
func (b *ringBuffer) nextTPacketHeader() *tPacketHeader {
	return &tPacketHeader{data: b.ring[b.offset*int(b.tpReq.blockSize):]}
}
Upon each block retrieval a new tPacketHeader
is instantiated. However, that's not really required: we can use the data itself to indicate whether a new header needs to be retrieved, and set up / reuse a single tPacketHeader
upon instantiation of the source.
In order to implement els0r/goProbe#88 a few extensions are required that simplify testing (or even allow some tests to begin with).
- capture.Packet method
On very rare occasions one of the tests for the ring buffer source fails (locally, never happened in a CI pipeline so far):
=== RUN TestClosedSourceNoDrain
./github.com/fako1024/slimcap/capture/afpacket/afring/afring_mock_test.go:139:
Error Trace: ./github.com/fako1024/slimcap/capture/afpacket/afring/afring_mock_test.go:139
Error: Expected nil, but got: capture.Packet{0x3, 0xe, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x40, 0x0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x6, 0x0, 0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8, 0x15, 0xb3, 0x0, 0x50, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}
Test: TestClosedSourceNoDrain
--- FAIL: TestClosedSourceNoDrain (0.03s)
=== RUN TestFillRingBuffer
This happens after closing the mock source, where of course we'd expect no further packets to arrive:
// Close it right away
require.Nil(t, mockSrc.Close())
// Attempt to read from the source
pkt, err := mockSrc.NextPacket(nil)
require.Nil(t, pkt)
Most likely there is a bug / race somewhere in the Close()
procedure that causes the source to still be open for a brief moment after it returns. Certainly not a critical issue (since it most likely only affects the mock source and doesn't really do any harm), but annoying nonetheless.