
cz-nic / knot-resolver

351 stars, 28 watchers, 59 forks, 32.62 MB

Knot Resolver - resolve DNS names like it's 2024

Home Page: https://www.knot-resolver.cz/

License: Other

Makefile 0.08% C 57.60% Shell 2.26% Lua 15.10% Python 18.08% Go 0.21% Emacs Lisp 0.01% JavaScript 0.79% CSS 0.48% Smarty 0.11% Dockerfile 0.11% Meson 2.00% Roff 0.10% Jinja 2.86% Nix 0.19%
dns dnssec knot-resolver dns-over-tcp dns-over-tls dns-over-https dns-resolver dns-cache

knot-resolver's Introduction

Knot Resolver


Knot Resolver is a caching full resolver implementation written in C and LuaJIT, providing both a resolver library and a daemon. The core architecture is tiny and efficient, and provides a foundation and a state-machine-like API for extensions. Three modules are built in - iterator, validator, and cache - and a few more are loaded by default. Most of the rich features are written in Lua(JIT) and C. Batteries are included, but optional.
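As a small illustration of that module API from the Lua side, modules can be loaded and inspected at runtime (a sketch using the standard modules binding):

modules.load('stats')                -- pull in an optional module
print(table_print(modules.list()))   -- show what is currently loaded
modules.unload('stats')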

The LuaJIT modules, support for DNS privacy and DNSSEC, and a persistent cache with a low memory footprint make it a great personal DNS resolver or a research tool for tapping into DNS data. TL;DR: it's the OpenResty of DNS.

Strong filtering rules and auto-configuration with etcd make it a great large-scale resolver solution.

The server adopts a different scaling strategy than the rest of the DNS recursors: no threading and a shared-nothing architecture (except the MVCC cache, which may be shared) that allows you to pin instances to available CPU cores and grow by self-replication. You can start and stop additional nodes depending on the contention without downtime, which is by default automated by the included manager.

It also has strong support for DNS over TCP, notably TCP Fast Open, query pipelining and deduplication, and response reordering.
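Much of this is tunable from the Lua configuration; for instance (a minimal sketch, the value is only illustrative):

-- Cap how many queries may be pipelined on a single TCP connection.
net.tcp_pipeline(50)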

Packages

The latest stable packages for various distributions are available in our upstream repository. Follow the installation instructions to add this repository to your system.

Knot Resolver is also available from the following distributions' repositories.

Building from sources

Knot Resolver mainly depends on Knot DNS libraries, LuaJIT, and libuv. See the Building project documentation page for more information.

Docker image

This is simple and doesn't require any dependencies or system modifications; just run:

$ docker run -Pit cznic/knot-resolver

The images are meant as an easy way to try knot-resolver, and they're not designed for production use.

Running

The project builds a resolver library in the lib directory, and a daemon in the daemon directory. It requires no configuration or parameters to run a server on localhost.

$ kresd

See the documentation at knot-resolver.cz/documentation/latest for more options.

Contacting us

knot-resolver's People

Contributors

alesmrazek, alexforster, andir, catap, cscm, daurnimator, davidjb, dkg, facboy, fcelda, felixonmars, hasnat, hectorm, helb, jirutka, jruzicka-nic, jsoref, karel-slany-nic-cz, libor-peltan-cznic, lotia, nicki-krizek, paulosv, paveldol, pspacek, salzmdan, spiffyk, ulrichwisser, unicycle2, vavrusa, vcunat



knot-resolver's Issues

Building from source doesn't get the library link correct

Greetings. In /root/Source/knot-resolver, I give the command:
make install PREFIX=/root/Target/knot-resolver
/root/Target/knot-resolver/sbin/kresd is built. /root/Target/knot-resolver/lib shows:
drwxr-xr-x 4 root root 4.0K Jul 29 11:20 kdns_modules
lrwxrwxrwx 1 root root 12 Jul 29 11:20 libkres.so -> libkres.so.1
-rwxr-xr-x 1 root root 214K Jul 29 11:20 libkres.so.1
drwxr-xr-x 2 root root 4.0K Jul 29 11:20 pkgconfig
However, /root/Target/knot-resolver/sbin/kresd -h returns:
/root/Target/knot-resolver/sbin/kresd: error while loading shared libraries: libkres.so.1: cannot open shared object file: No such file or directory

Clues? Fixes?

How can I forward all requests to an upstream

I'm using Ubuntu 18.04 64-bit.
I'm using DNS over TLS.

-- net = { net.ens160 }
net.tls('/etc/letsencrypt/live/dns.xxx.com/fullchain.pem','/etc/letsencrypt/live/dns.xxx.com/privkey.pem')
net.listen('::', 853)
net.listen('0.0.0.0', 853)

How can I set up all incoming requests to go to the upstream 8.8.8.8?
I can't see an example in the documentation.

modules.load('policy');
-- Forward all queries (to public resolvers google DNS)
policy:add(policy.all(policy.FORWARD('8.8.8.8')))
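That rule forwards everything in plain DNS; if the upstream leg should be encrypted as well, a variant might look like this (a sketch, assuming a kresd version that provides policy.TLS_FORWARD):

modules.load('policy')
-- Forward all queries over TLS; the hostname is needed to validate
-- the upstream certificate.
policy.add(policy.all(policy.TLS_FORWARD({
    {'8.8.8.8', hostname = 'dns.google'}
})))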

broken makefile dependencies

/usr/lib64/gcc/x86_64-suse-linux/5/../../../../x86_64-suse-linux/bin/ld: cannot find -lkres
collect2: error: ld returned 1 exit status
modules/kmemcached/kmemcached.mk:4: recipe for target 'modules/kmemcached/kmemcached.so' failed
make: *** [modules/kmemcached/kmemcached.so] Error 1

It works without -j 4.

Segmentation fault with new version 5.4.1

Hi
I built the new version together with knot-dns 3.1.1,
but when I try to run it:

[ 190.210560] kresd[3304]: segfault at 0 ip 0000000000000000 sp 00007ffdfb797980 error 14 cpu 3 in kresd[400000+8000]
[ 190.236458] Code: Unable to access opcode bytes at RIP 0xffffffffffffffd6.
[ 216.630076] kresd[3462]: segfault at 0 ip 0000000000000000 sp 00007fff699954a0 error 14 cpu 3 in kresd[400000+8000]
[ 216.657033] Code: Unable to access opcode bytes at RIP 0xffffffffffffffd6.

[BUG] v5.2.1 Issues loading dnstap module

Hello Knot team,

Great job on a performant DNS resolver! This is an amazingly easy-to-install service with loads of different options!

However, we are running into an issue on v5.2.1 when attempting to load the dnstap module :(

We think it might be tied to some build dependencies that are not being pulled in. We are using Ubuntu 18.04 and installing via apt.

Here is the error we are getting:

error: module 'kres_modules.dnstap' not found:
	no field package.preload['kres_modules.dnstap']
	no file '/usr/lib/knot-resolver/kres_modules/dnstap.lua'
	no file '/usr/lib/knot-resolver/kres_modules/dnstap/init.lua'
	no file './kres_modules/dnstap.lua'
	no file '/usr/share/luajit-2.1.0-beta3/kres_modules/dnstap.lua'
	no file '/usr/local/share/lua/5.1/kres_modules/dnstap.lua'
	no file '/usr/local/share/lua/5.1/kres_modules/dnstap/init.lua'
	no file '/usr/share/lua/5.1/kres_modules/dnstap.lua'
	no file '/usr/share/lua/5.1/kres_modules/dnstap/init.lua'
	no file '/usr/lib/knot-resolver/kres_modules/dnstap.so'
	no file './kres_modules/dnstap.so'
	no file '/usr/local/lib/lua/5.1/kres_modules/dnstap.so'
	no file '/usr/lib/x86_64-linux-gnu/lua/5.1/kres_modules/dnstap.so'
	no file '/usr/local/lib/lua/5.1/loadall.so'
	no file '/usr/lib/knot-resolver/kres_modules.so'
	no file './kres_modules.so'
	no file '/usr/local/lib/lua/5.1/kres_modules.so'
	no file '/usr/lib/x86_64-linux-gnu/lua/5.1/kres_modules.so'
	no file '/usr/local/lib/lua/5.1/loadall.so'
[system] failed to load module 'dnstap'
error occurred here (config filename:lineno is at the bottom, if config is involved):
stack traceback:
	[C]: in function 'load'
	[string "return table_print(modules.load('dnstap'))"]:1: in main chunk

When we attempt to build the Docker container from this repo, we get:

Message: --- dnstap module dependencies ---
Dependency libprotobuf-c found: YES 1.3.1
Dependency libfstrm found: YES 0.4.0
Program protoc-c found: NO
Message: ----------------------------------

When we add protobuf-c-compiler to the Docker build (line 11), we get a more positive result:

Message: --- dnstap module dependencies ---
Dependency libprotobuf-c found: YES 1.3.1
Dependency libfstrm found: YES 0.4.0
Program protoc-c found: YES (/usr/bin/protoc-c)
Message: ----------------------------------

Is there something else we might be missing?

Thanks for taking the time to look over this issue. Have a great day!

kresd poor performance when run as a Docker container

Dear CZ-NIC fellows,

I'm writing you on behalf of Whalebone organization regarding a performance issue I have been experiencing with Knot Resolver in Docker container.

Without Docker

If I clone and compile Knot Resolver's master on a 2-core Fedora 24 VM with the usual default -O2 and run it with the default config.personal, I can easily get an average resolution time of around ~300 ms per domain, starting with a cold cache and using a 250-record list of top Alexa domains.

With Docker

When I grab either your Docker image based on Alpine Linux or my own Docker image based on Fedora 24 and run it on the very same, aforementioned Fedora 24 VM with 2 cores (docker 1.10.3, build 19b5791/1.10.3), I cannot get under ~1100 ms average resolution time per domain (with the same list).

This is not any kind of weird stress test; I use the ancient namebench 1.3, where a single thread queries the resolver one record at a time.

Expected results and Unbound

Unbound resolver performs virtually the same, regardless of whether it's being run as a Docker container or a plain process on the same host.

Debugging

Neither CPU consumption nor memory comes into play; everything seems quiet. There is nothing evil going on in iotop, and although Valgrind's Callgrind shows hot spots in Knot related to ld and symbol lookup for Lua, that appears to have no connection to the problem at hand.
I suspect that Knot treats sockets in a way that's hard to swallow for my kernel's (4.6.3-300.fc24.x86_64) networking stack while operating Docker namespaces. I tried both with and without various -f settings and SO_REUSEPORT; nothing seems to help Knot's performance in Docker.

support: create a commented default configuration

While the personal resolver requires zero configuration, there should be a documented config for multi-user deployments, with notes on the following (a rough sketch follows the list):

  • cache size and relation to available disk space / RAM
  • cache pruning strategy
  • forking-out clients
  • recommended modules (hints, policy, cachectl)
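Something along these lines, perhaps (a hedged sketch only; the values are placeholders, not recommendations):

-- Cache is LMDB-backed and bounded by this size; pick it to fit the
-- available disk space / RAM.
cache.size = 1 * GB

-- Modules recommended for a shared deployment.
modules = {
    'hints',   -- static hints
    'policy',  -- filtering and forwarding rules
}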

policy.FORWARD only uses the first IP when I give it a list of IPs for forwarding

For example, I set the policy policy.add(policy.all(policy.FORWARD('172.30.2.4','8.8.8.8'))), where 172.30.2.4 is an unreachable host.
During a request the resolver marks the first host as bad:
[20395][wrkr] => server: '172.30.2.4' flagged as 'bad'
but it doesn't try the second DNS server, 8.8.8.8. I am getting a timeout:

dig @127.0.0.1 google.com

; <<>> DiG 9.11.0-P3 <<>> @127.0.0.1 google.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

Why can policy.FORWARD take multiple IPs if it doesn't use them?

Walled Garden capability

Sorry if this has been asked before, but I haven't found any documentation or reference: can knot-resolver act as a walled garden? My specific use case is to limit access to a small whitelist of sites (walled garden) during work hours and then use regular OpenDNS after hours.
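Something like this is roughly what I imagine, if the policy module can express it (a sketch only; the time-of-day switching would still need extra Lua, and the domain names are placeholders):

modules.load('policy')
-- Allow only whitelisted suffixes and refuse everything else;
-- rules are evaluated in order, so the PASS matches first.
local allowed = policy.todnames({'example.com', 'example.net'})
policy.add(policy.suffix(policy.PASS, allowed))
policy.add(policy.all(policy.DENY))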

kr_request's qsource.addr is sometimes NULL

Dear Knot-Resolver team,

I'm writing you on behalf of the Whalebone organization regarding a possible bug in the resolved structure data passed on to consuming modules.

Synopsis

In my simple module, I have a collect function, registered as:

KR_EXPORT
const knot_layer_api_t *sinkit_layer(struct kr_module *module) {
    static knot_layer_api_t _layer = {
        .finish = &collect,
    };
    /* Store module reference */
    _layer.data = module;
    return &_layer;
}

In the aforementioned collect function, I retrieve the client's address this way:

struct kr_request *param = ctx->data;
struct kr_rplan *rplan = &param->rplan;
if(!param->qsource.addr) {
    ERR_MSG("Query source address is NULL. Skipping.\n");
    return ctx->state;
}
const struct sockaddr *sa = param->qsource.addr;
struct sockaddr_in *sin = (struct sockaddr_in *) sa;

const char *client_address =  inet_ntoa(sin->sin_addr);
DEBUG_MSG("Client IPv4 address: %s\n", client_address);

Problem

During Alexa Top 2000 resolving (with -f 1, an empty cache, 1 client, on a Fedora 24 VM, no Docker, nothing weird),
I used to get around 30 SEGFAULTs on inet_ntoa(sin->sin_addr) due to sin being NULL. As you can see above, I have introduced the if(!param->qsource.addr) workaround that prevents the SEGFAULT now.

See the crash below:

#0  0x00007ffff3bc421e in collect (ctx=0x7fffffffd6e0) at modules/sinkit/sinkit.c:145
#1  0x00007ffff7b7c951 in kr_resolve_finish (request=request@entry=0x5555557ddf90, state=state@entry=4) at lib/resolve.c:803
#2  0x000055555555e76e in qr_task_finalize (task=0x5555557ddf90, state=4) at daemon/worker.c:662
#3  0x000055555555ed82 in qr_task_step (task=0x5555557ddf90, packet_source=packet_source@entry=0x0, packet=0x555555846210) at daemon/worker.c:692
#4  0x000055555555fa49 in worker_resolve (worker=worker@entry=0x7ffff7e66010, query=<optimized out>, options=options@entry=2048, on_complete=<optimized out>, 
    baton=<optimized out>) at daemon/worker.c:973
#5  0x0000555555562403 in wrk_resolve (L=0x40000378) at daemon/bindings.c:990
#6  0x00007ffff6801ca6 in lj_BC_FUNCC () from /lib64/libluajit-5.1.so.2
#7  0x00007ffff6847ad0 in lua_pcall () from /lib64/libluajit-5.1.so.2
#8  0x000055555555bfcc in engine_pcall (L=<optimized out>, argc=<optimized out>) at daemon/engine.c:618
#9  0x0000555555560e0e in execute_callback (L=L@entry=0x40000378, argc=argc@entry=1) at daemon/bindings.c:704
#10 0x0000555555561f33 in event_callback (timer=0x5555558460d0) at daemon/bindings.c:722
#11 0x00007ffff72c003b in uv.run_timers () from /lib64/libuv.so.1
#12 0x00007ffff72b4e7c in uv_run () from /lib64/libuv.so.1
#13 0x000055555555a30a in run_worker (leader=true, ipc_set=0x7fffffffdaa0, engine=0x7fffffffde70, loop=0x7ffff74cd220) at daemon/main.c:360
#14 main (argc=<optimized out>, argv=<optimized out>) at daemon/main.c:566
138 static int collect(knot_layer_t *ctx) {
139    struct kr_request *param = ctx->data;
140    struct kr_rplan *rplan = &param->rplan;
141
142    const struct sockaddr *sa = param->qsource.addr;
143    struct sockaddr_in *sin = (struct sockaddr_in *) sa;
144
145    const char *client_address =  inet_ntoa(sin->sin_addr);
146    DEBUG_MSG("Client IPv4 address: %s\n", client_address);
(gdb) print *param
$12 = {ctx = 0x7fffffffde70, answer = 0x5555557df1f0, current_query = 0x0, qsource = {key = 0x0, addr = 0x0, dst_addr = 0x0, packet = 0x0}, upstream = {rtt = 0, addr = 0x0}, 
  options = 6, state = 4, authority = {at = 0x0, len = 0, cap = 0}, additional = {at = 0x0, len = 0, cap = 0}, rplan = {pending = {at = 0x5555557df2d8, len = 0, cap = 5}, 
    resolved = {at = 0x5555557dfe48, len = 1, cap = 5}, request = 0x5555557ddf90, pool = 0x5555557de050}, pool = {ctx = 0x5555557ddf20, alloc = 0x7ffff7b80ca0 <mp_alloc>, 
    free = 0x0}}

Question

Is it expected that this might happen? Should I be checking for any flags that would tell me beforehand that something has failed and I shouldn't process the data?

If you find it helpful, I can put together a small reproducer module, exercising just this particular piece of logic.

Thank you for your feedback.

Cheers
-K-

Running the resolver as a service on macOS fails

Hey there, I am trying to use knot-resolver on macOS as a service through Homebrew, but it fails because stdin is /dev/null.
To reproduce it you can check with this:

make && sudo python -c 'import subprocess; subprocess.call(["daemon/kresd"], stdin=subprocess.DEVNULL)'

and this is the output:

[system] error error: couldn't start event poller
# I changed the code to also print the error that happened
Return of uv_strerror(ret): invalid argument%    

Funnily enough, it only happens after installing the http lib with luarocks, using luarocks --lua-dir=/usr/local/opt/lua@5.1 --tree /usr/local/ install http CRYPTO_DIR=/usr/local/opt/openssl OPENSSL_DIR=/usr/local/opt/openssl/
I found out that if I remove this guard, or make it go to the else branch, it spawns the daemon fine.

Let me know if you need more information to find out what the bug is

daemon: track dhcp and reconfigure as validating stub / resolver

There should be a module to track changes in the network and environment to detect when the resolver is in an:

  • Environment that blocks DNS queries altogether (and revert to stub mode)
  • Environment with DNSSEC-unaware resolver (do validation)
  • Open environment (full recursive resolver)

This would make it as painless as possible for the end users with frequent network transitions (hotel wifi, workplace, home, ...)

Reading the DNS root zone file will result in invalid syntax

When the DNS root zone file is obtained as below and loaded with knot-resolver, the following error is output.

  1. Obtain the DNS root zone file
    $ dig @e.root-servers.net . ns | sudo tee /etc/knot-resolver/named.root

  2. Execute with the following parameters
    $ sudo -u knot-resolver kresd -c /etc/knot-resolver/kresd.conf -v -f 1 -k /etc/knot-resolver/root.key

  3. An error message is output

[system] bind to 'fe80::21e:6ff:fe33:8502@9953' Invalid argument
[     ][hint] /etc/knot-resolver/named.root:2: invalid syntax
[ ta ] warning: overriding previously set trust anchors for .

In kresd.conf, the following parameters are set around the root hints configuration.

--root.hints
hints.config("/etc/knot-resolver/named.root")
trust_anchors.config("/etc/knot-resolver/root.keys")

The contents of named.root are as follows.

$ cat named.root 

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @e.root-servers.net . ns
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24742
;; flags: qr aa rd; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 27
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;.                              IN      NS

;; ANSWER SECTION:
.                       518400  IN      NS      a.root-servers.net.
.                       518400  IN      NS      b.root-servers.net.
.                       518400  IN      NS      c.root-servers.net.
.                       518400  IN      NS      d.root-servers.net.
.                       518400  IN      NS      e.root-servers.net.
.                       518400  IN      NS      f.root-servers.net.
.                       518400  IN      NS      g.root-servers.net.
.                       518400  IN      NS      h.root-servers.net.
.                       518400  IN      NS      i.root-servers.net.
.                       518400  IN      NS      j.root-servers.net.
.                       518400  IN      NS      k.root-servers.net.
.                       518400  IN      NS      l.root-servers.net.
.                       518400  IN      NS      m.root-servers.net.

;; ADDITIONAL SECTION:
a.root-servers.net.     518400  IN      AAAA    2001:503:ba3e::2:30
b.root-servers.net.     518400  IN      AAAA    2001:500:200::b
c.root-servers.net.     518400  IN      AAAA    2001:500:2::c
d.root-servers.net.     518400  IN      AAAA    2001:500:2d::d
e.root-servers.net.     518400  IN      AAAA    2001:500:a8::e
f.root-servers.net.     518400  IN      AAAA    2001:500:2f::f
g.root-servers.net.     518400  IN      AAAA    2001:500:12::d0d
h.root-servers.net.     518400  IN      AAAA    2001:500:1::53
i.root-servers.net.     518400  IN      AAAA    2001:7fe::53
j.root-servers.net.     518400  IN      AAAA    2001:503:c27::2:30
k.root-servers.net.     518400  IN      AAAA    2001:7fd::1
l.root-servers.net.     518400  IN      AAAA    2001:500:9f::42
m.root-servers.net.     518400  IN      AAAA    2001:dc3::35
a.root-servers.net.     518400  IN      A       198.41.0.4
b.root-servers.net.     518400  IN      A       192.228.79.201
c.root-servers.net.     518400  IN      A       192.33.4.12
d.root-servers.net.     518400  IN      A       199.7.91.13
e.root-servers.net.     518400  IN      A       192.203.230.10
f.root-servers.net.     518400  IN      A       192.5.5.241
g.root-servers.net.     518400  IN      A       192.112.36.4
h.root-servers.net.     518400  IN      A       198.97.190.53
i.root-servers.net.     518400  IN      A       192.36.148.17
j.root-servers.net.     518400  IN      A       192.58.128.30
k.root-servers.net.     518400  IN      A       193.0.14.129
l.root-servers.net.     518400  IN      A       199.7.83.42
m.root-servers.net.     518400  IN      A       202.12.27.33

;; Query time: 84 msec
;; SERVER: 2001:500:a8::e#53(2001:500:a8::e)
;; WHEN: Fri Jul 28 23:30:05 JST 2017
;; MSG SIZE  rcvd: 811

The version is as follows.

$ kresd -V
Knot DNS Resolver, version 1.3.2

Segmentation fault when executing stats.get() via kresc without a key as parameter

  1. Start kresd 1.3.0 built with debug flags with gdb --args /usr/local/sbin/kresd --config=/dev/null --verbose --forks=1 /run/knot-resolver/cache or valgrind --leak-check=full --track-origins=yes /usr/local/sbin/kresd --config=/dev/null --verbose --forks=1 /run/knot-resolver/cache
  2. Connect to kresd with kresc: kresc /run/knot-resolver/cache/tty/*
  3. Execute modules.load('stats'); within kresc
  4. Execute stats.get() within kresc
  5. kresd segfaults with the following backtrace
(gdb) bt
#0  0x00007ffff5e2250e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007fffed9976b5 in stats_get (env=0x7fffffffe5b0, module=0x55555579c3f0, args=0x0) at modules/stats/stats.c:282
#2  0x000055555555e7df in l_trampoline (L=0x40000378) at daemon/engine.c:529
#3  0x00007ffff67d8b27 in ?? () from /usr/lib/x86_64-linux-gnu/libluajit-5.1.so.2
#4  0x00007ffff682508d in lua_pcall () from /usr/lib/x86_64-linux-gnu/libluajit-5.1.so.2
#5  0x000055555555f34e in engine_pcall (L=0x40000378, argc=2) at daemon/engine.c:721
#6  0x000055555555f3c6 in engine_cmd (L=0x40000378, str=0x55555579f0b0 "stats.get()", raw=false) at daemon/engine.c:736
#7  0x000055555556a2df in tty_process_input (stream=0x55555579efb0, nread=12, buf=0x7fffffffaf30) at daemon/main.c:122
#8  0x00007ffff729c7dd in ?? () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#9  0x00007ffff729cf24 in ?? () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#10 0x00007ffff72a1ef8 in ?? () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#11 0x00007ffff7293934 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
#12 0x000055555556af88 in run_worker (loop=0x7ffff74ac900, engine=0x7fffffffe5b0, ipc_set=0x7fffffffe7b0, leader=true, control_fd=-1) at daemon/main.c:408
#13 0x000055555556c1c0 in main (argc=5, argv=0x7fffffffeb98) at daemon/main.c:748

Full backtrace:

(gdb) bt full
#0  0x00007ffff5e2250e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
No symbol table info available.
#1  0x00007fffed9976b5 in stats_get (env=0x7fffffffe5b0, module=0x55555579c3f0, args=0x0) at modules/stats/stats.c:282
        i = 0
        data = 0x5555557afa20
        ret = 0x55555579d420 "x\313\022\366\377\177"
        val = 0x40021170
#2  0x000055555555e7df in l_trampoline (L=0x40000378) at daemon/engine.c:529
        prop = 0x7fffed997649 <stats_get>
        ret = 0x0
        root_node = 0x40021170
        module = 0x55555579c3f0
        callback = 0x7fffed997649 <stats_get>
        engine = 0x7fffffffe5b0
        args = 0x0
        cleanup_args = 0x0
#3  0x00007ffff67d8b27 in ?? () from /usr/lib/x86_64-linux-gnu/libluajit-5.1.so.2
No symbol table info available.
#4  0x00007ffff682508d in lua_pcall () from /usr/lib/x86_64-linux-gnu/libluajit-5.1.so.2
No symbol table info available.
#5  0x000055555555f34e in engine_pcall (L=0x40000378, argc=2) at daemon/engine.c:721
No locals.
#6  0x000055555555f3c6 in engine_cmd (L=0x40000378, str=0x55555579f0b0 "stats.get()", raw=false) at daemon/engine.c:736
No locals.
#7  0x000055555556a2df in tty_process_input (stream=0x55555579efb0, nread=12, buf=0x7fffffffaf30) at daemon/main.c:122
        engine = 0x7fffffffe5b0
        ret = 32767
        message = 0x7fffffffaf30 "\260\360yUUU"
        delim = 0x7ffff74ac900 "\020\340\361\367\377\177"
        is_binary = true
        L = 0x40000378
        fp_out = 0x7fffffffb210
        cmd = 0x55555579f0b0 "stats.get()"
        out = 0x5555557af0c0
        stream_fd = 17
#8  0x00007ffff729c7dd in ?? () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#9  0x00007ffff729cf24 in ?? () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#10 0x00007ffff72a1ef8 in ?? () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#11 0x00007ffff7293934 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#12 0x000055555556af88 in run_worker (loop=0x7ffff74ac900, engine=0x7fffffffe5b0, ipc_set=0x7fffffffe7b0, leader=true, control_fd=-1) at daemon/main.c:408
        sock_file = 0x55555577eaa0 "tty/13602"
        pipe = {data = 0x0, loop = 0x7ffff74ac900, type = UV_NAMED_PIPE, close_cb = 0x0, handle_queue = {0x55555579efd0, 0x55555579ee20}, u = {fd = 31, reserved = {0x1f, 0x21, 0x7ffff6a465e0, 0x400019a0}}, next_closing = 0x0, flags = 24576, write_queue_size = 0, alloc_cb = 0x0, read_cb = 0x0, connect_req = 0x0, shutdown_req = 0x0, io_watcher = {
            cb = 0x7ffff729d4d0, pending_queue = {0x7fffffffe380, 0x7fffffffe380}, watcher_queue = {0x7fffffffe390, 0x7fffffffe390}, pevents = 1, events = 1, fd = 16}, write_queue = {0x7fffffffe3b0, 0x7fffffffe3b0}, write_completed_queue = {0x7fffffffe3c0, 0x7fffffffe3c0}, connection_cb = 0x55555556a51c <tty_accept>, delayed_error = 0,
          accepted_fd = -1, queued_fds = 0x0, ipc = 0, pipe_fname = 0x55555577eb60 "tty/13602"}
#13 0x000055555556c1c0 in main (argc=5, argv=0x7fffffffeb98) at daemon/main.c:748
        forks = 1
        addr_set = {at = 0x0, len = 0, cap = 0}
        tls_set = {at = 0x0, len = 0, cap = 0}
        fd_set = {at = 0x0, len = 0, cap = 0}
        tls_fd_set = {at = 0x0, len = 0, cap = 0}
        keyfile = 0x0
        moduledir = 0x5555555732e6 "/usr/local/lib/kdns_modules"
        config = 0x7fffffffede7 "/dev/null"
        control_fd = -1
        c = -1
        li = 6
        ret = 0
        opts = {{name = 0x555555573649 "addr", has_arg = 1, flag = 0x0, val = 97}, {name = 0x55555557364e "tls", has_arg = 1, flag = 0x0, val = 116}, {name = 0x555555573652 "fd", has_arg = 1, flag = 0x0, val = 83}, {name = 0x555555573655 "tlsfd", has_arg = 1, flag = 0x0, val = 84}, {name = 0x55555557349b "config", has_arg = 1, flag = 0x0,
            val = 99}, {name = 0x55555557365b "keyfile", has_arg = 1, flag = 0x0, val = 107}, {name = 0x555555573663 "forks", has_arg = 1, flag = 0x0, val = 102}, {name = 0x555555573669 "moduledir", has_arg = 1, flag = 0x0, val = 109}, {name = 0x555555573673 "verbose", has_arg = 0, flag = 0x0, val = 118}, {name = 0x55555557367b "quiet",
            has_arg = 0, flag = 0x0, val = 113}, {name = 0x555555573681 "version", has_arg = 0, flag = 0x0, val = 86}, {name = 0x555555573689 "help", has_arg = 0, flag = 0x0, val = 104}, {name = 0x0, has_arg = 0, flag = 0x0, val = 0}}
        ipc_set = {at = 0x0, len = 0, cap = 0}
        fork_id = 0
        pool = {ctx = 0x55555579b230, alloc = 0x7ffff7b74082 <mp_alloc>, free = 0x0}
        engine = {resolver = {options = 0, opt_rr = 0x55555579b2a0, trust_anchors = {root = 0x0, malloc = 0x7ffff7b54618 <malloc_std>, free = 0x7ffff7b54636 <free_std>, baton = 0x0}, negative_anchors = {root = 0x0, malloc = 0x7ffff7b54618 <malloc_std>, free = 0x7ffff7b54636 <free_std>, baton = 0x0}, root_hints = {name = 0x55555579b2d8 "",
              key = 0x0, trust_anchor = 0x0, parent = 0x0, nsset = {root = 0x55555579b3a1, malloc = 0x7ffff7b6cdd0 <mm_alloc>, free = 0x7ffff7b6ce12 <mm_free>, baton = 0x7fffffffe790}, pool = 0x7fffffffe790}, cache = {db = 0x55555579db40, api = 0x7ffff7d97080 <api>, stats = {hit = 0, miss = 0, insert = 0, delete = 0}, ttl_min = 0,
              ttl_max = 518400}, cache_rtt = 0x7ffff43a0010, cache_rep = 0x7ffff7f5f010, modules = 0x7fffffffe720, cookie_ctx = {clnt = {enabled = false, current = {secr = 0x0, alg_id = 0}, recent = {secr = 0x0, alg_id = 0}}, srvr = {enabled = false, current = {secr = 0x0, alg_id = 0}, recent = {secr = 0x0, alg_id = 0}}},
            cache_cookie = 0x7ffff419f010, tls_padding = -1, pool = 0x7fffffffe790}, net = {loop = 0x7ffff74ac900, endpoints = {root = 0x55555577e921, malloc = 0x7ffff7b54618 <malloc_std>, free = 0x7ffff7b54636 <free_std>, baton = 0x0}, tls_credentials = 0x0}, modules = {at = 0x55555579c2a0, len = 5, cap = 5}, backends = {at = 0x55555579c3c0,
            len = 1, cap = 5}, ipc_set = {at = 0x0, len = 0, cap = 0}, pool = 0x7fffffffe790, updater = 0x55555579ee00, hostname = 0x0, L = 0x40000378, moduledir = 0x55555579d390 "/usr/local/lib/kdns_modules"}
        worker = 0x7ffff7f1e010
        loop = 0x7ffff74ac900
        sigint = {data = 0x0, loop = 0x7ffff74ac900, type = UV_SIGNAL, close_cb = 0x0, handle_queue = {0x7fffffffe490, 0x7ffff74ac9d0}, u = {fd = 0, reserved = {0x0, 0x0, 0x0, 0x0}}, next_closing = 0x0, flags = 24576, signal_cb = 0x55555556a9d9 <signal_handler>, signum = 2, tree_entry = {rbe_left = 0x0, rbe_right = 0x7fffffffe470,
            rbe_parent = 0x0, rbe_color = 0}, caught_signals = 0, dispatched_signals = 0}
        sigterm = {data = 0x0, loop = 0x7ffff74ac900, type = UV_SIGNAL, close_cb = 0x0, handle_queue = {0x55555579d5a0, 0x7fffffffe530}, u = {fd = 0, reserved = {0x0, 0x0, 0x0, 0x0}}, next_closing = 0x0, flags = 24576, signal_cb = 0x55555556a9d9 <signal_handler>, signum = 15, tree_entry = {rbe_left = 0x0, rbe_right = 0x0,
            rbe_parent = 0x7fffffffe510, rbe_color = 1}, caught_signals = 0, dispatched_signals = 0}

Valgrind:

Invalid read of size 1
   at 0x4C2FE23: strcmp (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
   by 0xF6726B4: stats_get (stats.c:282)
   by 0x1127DE: l_trampoline (engine.c:529)
   by 0x61D2B26: ??? (in /usr/lib/x86_64-linux-gnu/libluajit-5.1.so.2.1.0)
   by 0x621F08C: lua_pcall (in /usr/lib/x86_64-linux-gnu/libluajit-5.1.so.2.1.0)
   by 0x11334D: engine_pcall (engine.c:721)
   by 0x1133C5: engine_cmd (engine.c:736)
   by 0x11E2DE: tty_process_input (main.c:122)
   by 0x57777DC: ??? (in /usr/lib/x86_64-linux-gnu/libuv.so.1.0.0)
   by 0x5777F23: ??? (in /usr/lib/x86_64-linux-gnu/libuv.so.1.0.0)
   by 0x577CEF7: ??? (in /usr/lib/x86_64-linux-gnu/libuv.so.1.0.0)
   by 0x576E933: uv_run (in /usr/lib/x86_64-linux-gnu/libuv.so.1.0.0)
 Address 0x0 is not stack'd, malloc'd or (recently) free'd


Process terminating with default action of signal 11 (SIGSEGV)
 Access not within mapped region at address 0x0
   at 0x4C2FE23: strcmp (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
   by 0xF6726B4: stats_get (stats.c:282)
   by 0x1127DE: l_trampoline (engine.c:529)
   by 0x61D2B26: ??? (in /usr/lib/x86_64-linux-gnu/libluajit-5.1.so.2.1.0)
   by 0x621F08C: lua_pcall (in /usr/lib/x86_64-linux-gnu/libluajit-5.1.so.2.1.0)
   by 0x11334D: engine_pcall (engine.c:721)
   by 0x1133C5: engine_cmd (engine.c:736)
   by 0x11E2DE: tty_process_input (main.c:122)
   by 0x57777DC: ??? (in /usr/lib/x86_64-linux-gnu/libuv.so.1.0.0)
   by 0x5777F23: ??? (in /usr/lib/x86_64-linux-gnu/libuv.so.1.0.0)
   by 0x577CEF7: ??? (in /usr/lib/x86_64-linux-gnu/libuv.so.1.0.0)
   by 0x576E933: uv_run (in /usr/lib/x86_64-linux-gnu/libuv.so.1.0.0)

Feature request: try upstream servers in sequence / as fallback

Currently, policy.STUB and policy.FORWARD select the fastest server in the server list. Could there be an option to let them try servers in the configured sequence? (It would be better still if a timeout could be set in the config.)

I wish there were some mechanism so that if a policy (even a non-chained one) fails, the request could continue to the next policy. But that seems much more difficult to implement.

==========
Original issue:

Are servers specified in the FORWARD policy tried randomly or in sequence?

I want to use some servers only as backup (since they do not return very good results). Is this possible with the FORWARD policy?

build broken in master

modules/redis/redis.c: In function 'namedb_redis_mkopts':
modules/redis/redis.c:31:2: error: 'auto_free' undeclared (first use in this function)
  auto_free char *conf = strdup(conf_);
  ^
modules/redis/redis.c:31:2: note: each undeclared identifier is reported only once for each function it appears in
modules/redis/redis.c:31:12: error: expected ';' before 'char'
  auto_free char *conf = strdup(conf_);
            ^
modules/redis/redis.c:33:15: error: 'conf' undeclared (first use in this function)
  if (!cli || !conf) {
               ^
modules/redis/redis.mk:4: recipe for target 'modules/redis/redis.o' failed
make: *** [modules/redis/redis.o] Error 1
make: *** Waiting for unfinished jobs....
modules/kmemcached/namedb_memcached.c: In function 'init':
modules/kmemcached/namedb_memcached.c:47:2: error: 'auto_free' undeclared (first use in this function)
  auto_free char *config_str = kr_strcatdup(2, arg, " --BINARY-PROTOCOL");
  ^
modules/kmemcached/namedb_memcached.c:47:2: note: each undeclared identifier is reported only once for each function it appears in
modules/kmemcached/namedb_memcached.c:47:12: error: expected ';' before 'char'
  auto_free char *config_str = kr_strcatdup(2, arg, " --BINARY-PROTOCOL");
            ^
modules/kmemcached/namedb_memcached.c:48:35: error: 'config_str' undeclared (first use in this function)
  memcached_st *handle = memcached(config_str, strlen(config_str));
                                   ^
modules/kmemcached/kmemcached.mk:4: recipe for target 'modules/kmemcached/namedb_memcached.o' failed
make: *** [modules/kmemcached/namedb_memcached.o] Error 1

cache prefers parent-side TTL to authoritative

Reported on ML.

Knot Resolver seems to cache the TTL from the parent zone (= delegation) instead
of the TTL in the zone itself.

dig +noall +auth @n.ns.at univie.ac.at ns 
univie.ac.at.		10800	IN	NS	ns3.univie.ac.at.
univie.ac.at.		10800	IN	NS	ns4.univie.ac.at.
univie.ac.at.		10800	IN	NS	ns5.univie.ac.at.
univie.ac.at.		10800	IN	NS	ns7.univie.ac.at.
univie.ac.at.		10800	IN	NS	ns8.univie.ac.at.
univie.ac.at.		10800	IN	NS	ns10.univie.ac.at.
dig +noall +ans @ns10.univie.ac.at univie.ac.at ns
univie.ac.at.		600	IN	NS	ns7.univie.ac.at.
univie.ac.at.		600	IN	NS	ns4.univie.ac.at.
univie.ac.at.		600	IN	NS	ns8.univie.ac.at.
univie.ac.at.		600	IN	NS	ns3.univie.ac.at.
univie.ac.at.		600	IN	NS	ns5.univie.ac.at.
univie.ac.at.		600	IN	NS	ns10.univie.ac.at.
dig +noall +ans @ns10.univie.ac.at ns10.univie.ac.at a
ns10.univie.ac.at.	600	IN	A	192.76.243.2

Knot Resolver is caching 10800 instead of 600:

dig +noall +answer @127.0.0.1 ns10.univie.ac.at a
ns10.univie.ac.at.	10634	IN	A	192.76.243.2

Bind, unbound and pdns-recursor cache authoritative TTL (600).

daemon: implement cosockets in Lua

Idea

This would allow modules in Lua land to start socket servers and pass data on without blocking the main thread. It could be implemented as a wrapper over the LuaSocket API that yields with a file descriptor and then returns when that file descriptor becomes active.
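A rough shape of such a wrapper (a sketch only, assuming non-blocking LuaSocket semantics where receive() fails with 'timeout'):

-- Yield the fd to the event loop; the loop resumes this coroutine
-- once the descriptor becomes readable again.
local function corecv(sock)
    local line, err = sock:receive('*l')
    while not line do
        if err ~= 'timeout' then return nil, err end
        coroutine.yield(sock:getfd(), 'r')
        line, err = sock:receive('*l')
    end
    return line
end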

slow on first resolve.

Hello,
On Ubuntu 14.04 I installed it like so:

sudo add-apt-repository ppa:cz.nic-labs/knot-dns
sudo apt-get update
sudo apt-get install knot-resolver

dig @127.0.0.1 slashdot.org times out the first time I run it, and the following shows up in the console running kresd. Most of my queries have the same problem: over 1 second to resolve the first time. Once it's in the cache, it is very fast.

# kresd -v
[system] interactive mode
> [plan] plan 'slashdot.org.' type 'A'
[resl]   => querying: '2001:503:ba3e::2:30' score: 10 zone cut: '.' m12n: 'oRG.' type: 'NS'
[resl]      optional: '198.41.0.4' score: 10 zone cut: '.' m12n: 'oRG.' type: 'NS'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:503:ba3e::2:30' score: 10 zone cut: '.' m12n: 'ORg.' type: 'NS'
[resl]      optional: '198.41.0.4' score: 10 zone cut: '.' m12n: 'ORg.' type: 'NS'
[plan] plan 'slashdot.org.' type 'A'
[resl]   => querying: '2001:500:2::c' score: 10 zone cut: '.' m12n: 'OrG.' type: 'NS'
[resl]      optional: '192.33.4.12' score: 10 zone cut: '.' m12n: 'OrG.' type: 'NS'
[resl]   => querying: '2001:500:1::803f:235' score: 10 zone cut: '.' m12n: 'ORg.' type: 'NS'
[resl]      optional: '128.63.2.53' score: 10 zone cut: '.' m12n: 'ORg.' type: 'NS'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:500:2::c' score: 10 zone cut: '.' m12n: 'oRG.' type: 'NS'
[resl]      optional: '192.33.4.12' score: 10 zone cut: '.' m12n: 'oRG.' type: 'NS'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:500:1::803f:235' score: 10 zone cut: '.' m12n: 'ORG.' type: 'NS'
[resl]      optional: '128.63.2.53' score: 10 zone cut: '.' m12n: 'ORG.' type: 'NS'
[plan] plan 'slashdot.org.' type 'A'
[resl]   => querying: '198.41.0.4' score: 10 zone cut: '.' m12n: 'OrG.' type: 'NS'
[iter]   <= referral response, follow
[resl]   => querying: '2001:500:e::1' score: 10 zone cut: 'org.' m12n: 'sLaSHdot.Org.' type: 'A'
[resl]      optional: '199.19.56.1' score: 10 zone cut: 'org.' m12n: 'sLaSHdot.Org.' type: 'A'
[resl]   => querying: '2001:500:2d::d' score: 10 zone cut: '.' m12n: 'Org.' type: 'NS'
[resl]      optional: '199.7.91.13' score: 10 zone cut: '.' m12n: 'Org.' type: 'NS'
[resl]   => querying: '2001:500:84::b' score: 10 zone cut: '.' m12n: 'orG.' type: 'NS'
[resl]      optional: '192.228.79.201' score: 10 zone cut: '.' m12n: 'orG.' type: 'NS'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:500:e::1' score: 10 zone cut: 'org.' m12n: 'slaSHdOT.oRg.' type: 'A'
[resl]      optional: '199.19.56.1' score: 10 zone cut: 'org.' m12n: 'slaSHdOT.oRg.' type: 'A'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:500:2d::d' score: 10 zone cut: '.' m12n: 'ORG.' type: 'NS'
[resl]      optional: '199.7.91.13' score: 10 zone cut: '.' m12n: 'ORG.' type: 'NS'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:500:84::b' score: 10 zone cut: '.' m12n: 'OrG.' type: 'NS'
[resl]      optional: '192.228.79.201' score: 10 zone cut: '.' m12n: 'OrG.' type: 'NS'
[resl]   => querying: '2001:500:c::1' score: 10 zone cut: 'org.' m12n: 'SLasHDot.ORg.' type: 'A'
[resl]      optional: '199.19.54.1' score: 10 zone cut: 'org.' m12n: 'SLasHDot.ORg.' type: 'A'
[resl]   => querying: '2001:500:2f::f' score: 10 zone cut: '.' m12n: 'org.' type: 'NS'
[resl]      optional: '192.5.5.241' score: 10 zone cut: '.' m12n: 'org.' type: 'NS'
[resl]   => querying: '2001:7fd::1' score: 10 zone cut: '.' m12n: 'OrG.' type: 'NS'
[resl]      optional: '193.0.14.129' score: 10 zone cut: '.' m12n: 'OrG.' type: 'NS'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:500:c::1' score: 10 zone cut: 'org.' m12n: 'slASHdOt.ORG.' type: 'A'
[resl]      optional: '199.19.54.1' score: 10 zone cut: 'org.' m12n: 'slASHdOt.ORG.' type: 'A'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:500:2f::f' score: 10 zone cut: '.' m12n: 'ORG.' type: 'NS'
[resl]      optional: '192.5.5.241' score: 10 zone cut: '.' m12n: 'ORG.' type: 'NS'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:7fd::1' score: 10 zone cut: '.' m12n: 'orG.' type: 'NS'
[resl]      optional: '193.0.14.129' score: 10 zone cut: '.' m12n: 'orG.' type: 'NS'
[resl]   => querying: '199.19.56.1' score: 10 zone cut: 'org.' m12n: 'SlaShDOt.Org.' type: 'A'
[iter]   <= referral response, follow
[plan]   plan 'ns1.p03.dynect.net.' type 'AAAA'
[resl]     => querying: '2001:503:a83e::2:30' score: 10 zone cut: 'net.' m12n: 'dYnEcT.nEt.' type: 'NS'
[resl]        optional: '192.5.6.30' score: 10 zone cut: 'net.' m12n: 'dYnEcT.nEt.' type: 'NS'
[resl]   => querying: '192.112.36.4' score: 10 zone cut: '.' m12n: 'oRG.' type: 'NS'
[iter]   <= referral response, follow
[resl]   => querying: '2001:500:40::1' score: 10 zone cut: 'org.' m12n: 'SlASHdot.oRg.' type: 'A'
[resl]      optional: '199.249.112.1' score: 10 zone cut: 'org.' m12n: 'SlASHdot.oRg.' type: 'A'
[resl]   => querying: '192.228.79.201' score: 10 zone cut: '.' m12n: 'orG.' type: 'NS'
[iter]   <= referral response, follow
[resl]   => querying: '2001:500:40::1' score: 10 zone cut: 'org.' m12n: 'SLASHDot.oRG.' type: 'A'
[resl]      optional: '199.249.112.1' score: 10 zone cut: 'org.' m12n: 'SLASHDot.oRG.' type: 'A'
[resl]     => NS unreachable, retrying over TCP
[resl]     => querying: '2001:503:a83e::2:30' score: 10 zone cut: 'net.' m12n: 'dYnEcT.nET.' type: 'NS'
[resl]        optional: '192.5.6.30' score: 10 zone cut: 'net.' m12n: 'dYnEcT.nET.' type: 'NS'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:500:40::1' score: 10 zone cut: 'org.' m12n: 'SLaSHdOT.Org.' type: 'A'
[resl]      optional: '199.249.112.1' score: 10 zone cut: 'org.' m12n: 'SLaSHdOT.Org.' type: 'A'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:500:40::1' score: 10 zone cut: 'org.' m12n: 'SLashDoT.orG.' type: 'A'
[resl]      optional: '199.249.112.1' score: 10 zone cut: 'org.' m12n: 'SLashDoT.orG.' type: 'A'
[resl]     => querying: '192.26.92.30' score: 10 zone cut: 'net.' m12n: 'dYNECt.Net.' type: 'NS'
[iter]     <= referral response, follow
[resl]     => querying: '2001:500:90::100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.dYnect.NEt.' type: 'NS'
[resl]   => querying: '199.249.112.1' score: 10 zone cut: 'org.' m12n: 'slasHDOt.ORg.' type: 'A'
[iter]   <= referral response, follow
[plan]   plan 'ns1.p03.dynect.net.' type 'AAAA'
[resl]     => querying: '2001:500:90::100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.DynECt.NEt.' type: 'NS'
[resl]        optional: '208.78.70.100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.DynECt.NEt.' type: 'NS'
[resl]   => querying: '199.19.54.1' score: 10 zone cut: 'org.' m12n: 'SLAshdoT.ORG.' type: 'A'
[iter]   <= referral response, follow
[plan]   plan 'ns1.p03.dynect.net.' type 'AAAA'
[resl]     => querying: '2001:500:90::100' score: 10 zone cut: 'dynect.net.' m12n: 'P03.dYnEct.NEt.' type: 'NS'
[resl]        optional: '208.78.70.100' score: 10 zone cut: 'dynect.net.' m12n: 'P03.dYnEct.NEt.' type: 'NS'
[resl]     => NS unreachable, retrying over TCP
[resl]     => querying: '2001:500:90::100' score: 10 zone cut: 'dynect.net.' m12n: 'P03.DYneCt.NEt.' type: 'NS'
[resl]     => NS unreachable, retrying over TCP
[resl]     => querying: '2001:500:90::100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.dYNEct.NEt.' type: 'NS'
[resl]        optional: '208.78.70.100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.dYNEct.NEt.' type: 'NS'
[resl]     => NS unreachable, retrying over TCP
[resl]     => querying: '2001:500:90::100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.DYnect.nET.' type: 'NS'
[resl]        optional: '208.78.70.100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.DYnect.nET.' type: 'NS'
[resl]     => querying: '208.78.70.100' score: 10 zone cut: 'dynect.net.' m12n: 'P03.DYNEct.NEt.' type: 'NS'
[iter]     <= rcode: NOERROR
[iter]     <= found cut, retrying with non-minimized name
[resl]     => querying: '204.13.250.100' score: 10 zone cut: 'dynect.net.' m12n: 'nS1.P03.dYNect.NEt.' type: 'AAAA'
[iter]     <= rcode: NOERROR
[resl]   => querying: '2001:500:90:1::3' score: 10 zone cut: 'slashdot.org.' m12n: 'sLasHdot.org.' type: 'A'
[resl]     => querying: '2001:500:94::100' score: 10 zone cut: 'dynect.net.' m12n: 'P03.DynECt.net.' type: 'NS'
[resl]        optional: '208.78.71.100' score: 10 zone cut: 'dynect.net.' m12n: 'P03.DynECt.net.' type: 'NS'
[resl]     => querying: '2001:500:94::100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.DYnect.nET.' type: 'NS'
[resl]        optional: '208.78.71.100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.DYnect.nET.' type: 'NS'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:500:90:1::3' score: 10 zone cut: 'slashdot.org.' m12n: 'slasHDOT.ORG.' type: 'A'
[resl]     => NS unreachable, retrying over TCP
[resl]     => querying: '2001:500:94::100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.dYNECT.nET.' type: 'NS'
[resl]        optional: '208.78.71.100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.dYNECT.nET.' type: 'NS'
[resl]     => NS unreachable, retrying over TCP
[resl]     => querying: '2001:500:94::100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.DYNeCt.net.' type: 'NS'
[resl]        optional: '208.78.71.100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.DYNeCt.net.' type: 'NS'
[plan]   plan 'ns1.p03.dynect.net.' type 'A'
[resl]     => querying: '2001:500:94::100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.dyneCT.NET.' type: 'NS'
[resl]        optional: '208.78.71.100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.dyneCT.NET.' type: 'NS'
[resl]     => querying: '208.78.71.100' score: 10 zone cut: 'dynect.net.' m12n: 'P03.DynecT.neT.' type: 'NS'
[iter]     <= rcode: NOERROR
[iter]     <= found cut, retrying with non-minimized name
[resl]     => querying: '204.13.251.100' score: 10 zone cut: 'dynect.net.' m12n: 'Ns1.p03.dyneCt.NeT.' type: 'AAAA'
[iter]     <= rcode: NOERROR
[resl]   => querying: '2001:500:90:1::3' score: 2850 zone cut: 'slashdot.org.' m12n: 'slaSHDoT.org.' type: 'A'
[resl]     => querying: '204.13.250.100' score: 11 zone cut: 'dynect.net.' m12n: 'P03.DYneCt.Net.' type: 'NS'
[iter]     <= rcode: NOERROR
[iter]     <= found cut, retrying with non-minimized name
[resl]     => querying: '208.78.71.100' score: 12 zone cut: 'dynect.net.' m12n: 'nS1.p03.DynecT.nET.' type: 'AAAA'
[iter]     <= rcode: NOERROR
[resl]   => querying: '2001:500:90:1::3' score: 2850 zone cut: 'slashdot.org.' m12n: 'SlAShdot.oRG.' type: 'A'
[resl]     => NS unreachable, retrying over TCP
[resl]     => querying: '2001:500:94::100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.DYNECT.net.' type: 'NS'
[resl]        optional: '208.78.71.100' score: 10 zone cut: 'dynect.net.' m12n: 'p03.DYNECT.net.' type: 'NS'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:500:90:1::3' score: 2850 zone cut: 'slashdot.org.' m12n: 'SlaSHdOT.OrG.' type: 'A'
[resl]   => NS unreachable, retrying over TCP
[resl]   => querying: '2001:500:90:1::3' score: 2850 zone cut: 'slashdot.org.' m12n: 'SLashdOT.org.' type: 'A'
[resl]     => querying: '204.13.250.100' score: 11 zone cut: 'dynect.net.' m12n: 'P03.DYnECt.NEt.' type: 'NS'
[iter]     <= rcode: NOERROR
[iter]     <= found cut, retrying with non-minimized name
[resl]     => querying: '204.13.250.100' score: 11 zone cut: 'dynect.net.' m12n: 'Ns1.P03.DyNEcT.net.' type: 'A'
[iter]     <= rcode: NOERROR
[resl]   => querying: '208.78.70.3' score: 10 zone cut: 'slashdot.org.' m12n: 'SlAshdOT.org.' type: 'A'
[iter]   <= rcode: NOERROR
[resl] finished: 4, queries: 3, mempool: 32800 B
[plan]   plan 'ns1.p03.dynect.net.' type 'A'
[ rc ]     => satisfied from cache
[iter]     <= rcode: NOERROR
[ rc ]   => satisfied from cache
[iter]   <= rcode: NOERROR
[resl] finished: 4, queries: 3, mempool: 32800 B
[plan]   plan 'ns1.p03.dynect.net.' type 'A'
[ rc ]     => satisfied from cache
[iter]     <= rcode: NOERROR
[ rc ]   => satisfied from cache
[iter]   <= rcode: NOERROR
[resl] finished: 4, queries: 3, mempool: 32800 B

The second run is very fast:

dig @127.0.0.1 slashdot.org

; <<>> DiG 9.9.5-3ubuntu0.5-Ubuntu <<>> @127.0.0.1 slashdot.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2980
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;slashdot.org.                  IN      A

;; ANSWER SECTION:
slashdot.org.           20      IN      A       216.34.181.45

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Tue Nov 03 00:20:10 PST 2015
;; MSG SIZE  rcvd: 57

Any idea what the problem is?

build error from source

command: meson ./build_dir
and the output is:

[root@bogon knot-resolver]# meson ./build_dir
The Meson build system
Version: 0.63.0
Source dir: /home/xushipei/knot-resolver
Build dir: /home/xushipei/knot-resolver/build_dir
Build type: native build
Project name: knot-resolver
Project version: 5.5.1
C compiler for the host machine: cc (gcc 12.1.1 "cc (GCC) 12.1.1 20220507 (Red Hat 12.1.1-1)")
C linker for the host machine: cc ld.bfd 2.37-27
C++ compiler for the host machine: c++ (gcc 12.1.1 "c++ (GCC) 12.1.1 20220507 (Red Hat 12.1.1-1)")
C++ linker for the host machine: c++ ld.bfd 2.37-27
Host machine cpu family: x86_64
Host machine cpu: x86_64
Message: --- required dependencies ---
Found pkg-config: /usr/bin/pkg-config (1.8.0)
Run-time dependency libknot found: YES 3.2.dev.1657363084.bd23bc5bb
Run-time dependency libdnssec found: YES 3.2.dev.1657363084.bd23bc5bb
Run-time dependency libzscanner found: YES 3.2.dev.1657363084.bd23bc5bb
Run-time dependency libuv found: YES 1.44.1
Run-time dependency lmdb found: YES 0.9.29
Run-time dependency gnutls found: YES 3.7.6
Run-time dependency luajit found: YES 2.1.0-beta3
Message: ------------------------------
Message: --- optional dependencies ---
Run-time dependency libnghttp2 found: YES 1.46.0
Run-time dependency openssl found: YES 3.0.2
Checking for function "asprintf" : YES
Run-time dependency libcap-ng found: YES 0.8.2
Checking for function "sendmmsg" : YES
Has header "libknot/xdp/xdp.h" : YES
Run-time dependency libsystemd found: YES 250
Message: ---------------------------
Configuring kresconfig.h using configuration
Configuring trust_anchors.lua using configuration
Configuring sandbox.lua using configuration
Configuring distro-preconfig.lua using configuration
Program ./kres-gen.sh found: YES (/home/xushipei/knot-resolver/daemon/lua/./kres-gen.sh)
Checking for size of "time_t" : 8
Checking for size of "struct timeval" : 16
Checking for size of "zs_scanner_t" with dependency libzscanner: 206144
Checking for size of "knot_pkt_t" with dependency libknot: 464
Program luajit found: YES (/usr/bin/luajit)

daemon/lua/meson.build:92:4: ERROR: Problem encountered: if you use released Knot* versions, please contact us: https://www.knot-resolver.cz/contact/
/usr/bin/luajit: (command line):20: Lua binding for C type knot_pkt_t has incorrect size: 208
stack traceback:
[C]: in function 'assert'
(command line):20: in main chunk
[C]: at 0x5634cecfbaa0

lib: implement subset of getdns API

Problem

I'd like the library to be reusable by clients. getdns is a way to do that.

Expected outcome

  • Download the latest header
  • Implement simple query resolution
  • Implement basic options (TCP, EDNS...)
  • Implement unit tests (some possibly exist in other implementations!)

etcd module for remote configuration interface

Since the configuration has syntactic sugar that hides function calls behind key-value assignments, it can be treated with a key-value paradigm. On top of that, it has expressions which evaluate on the host. This makes centralized configuration possible with projects such as etcd.
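For illustration, the two equivalent configuration forms that this sugar provides, which is what would let a key-value store drive it:

-- Declarative (key/value) form, as a remote store could publish it:
modules = { 'hints' }
net = { '127.0.0.1' }
-- ...is sugar over the imperative call form:
modules.load('hints')
net.listen('127.0.0.1')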

Expected outcome

  • Play with etcd for a while
  • Write a module using etcd api (Lua/C)
  • Test configuration of hints, cache and network
  • Write a documentation page on this

"pullrequest" fix pkgconfig install :)

diff --git a/lib/lib.mk b/lib/lib.mk
index 822af51..7d89590 100644
--- a/lib/lib.mk
+++ b/lib/lib.mk
@@ -64,7 +64,8 @@ libkres.pc:
        @echo 'Libs: -L$${libdir} -lkres' >> $@
        @echo 'Cflags: -I$${includedir}' >> $@
 libkres-pcinstall: libkres.pc libkres-install
-       $(INSTALL) -m 644 $< $(DESTDIR)$(LIBDIR)/pkgconfig
+       $(INSTALL) -d -m 755 $(DESTDIR)$(LIBDIR)/pkgconfig/
+       $(INSTALL)    -m 644 $< $(DESTDIR)$(LIBDIR)/pkgconfig/

 # Targets
 lib: $(libkres)

@alesmrazek github GPG keys

@alesmrazek it appears as though you have changed GPG keys (to 3057EE9A448F362D74205A779AB120DA0A76F6DE) but have not updated your github profile (or easily made that key available).

This results in the github UI looking like e.g.
image

image

Please:

  • Make this new key available somewhere (e.g. push to keyservers)
  • Confirm that this new key is yours (using your old key to sign the new one is the best method)
  • Upload your new key to github so that the UI is fixed

daemon: support experimental DNS-over-HTTPS

Google DNS announced DNS-over-HTTPS. This is a great step towards a really new DNS API for application developers that doesn't force DNS consumers to understand how DNS works.

Problem

Given this, a client would require only a thin library to figure out which resolver to ask (instead of hardcoding the user's resolver), and the payload could be delivered directly to a client that either understands JSON (typically web browsers) or unpacks the JSON response into a native structure in the given library, without cramming custom data types / event loops / whatever into the DNS library.

Deliverables

  • Lua module that starts an HTTPS server and listens for requests
  • Translate the requests to internal DNS resolution lookups (already API for this)
  • Convert DNS raw answer to schema in JSON
  • Write a user-space library and tooling around it

Mock API library

struct dns_resolve {};
/*@ Select next address for query (if the client does DNS/HTTPS itself). */
char *dns_select_server(void);
/*@ Open DNS context for resolution. */
int dns_open(struct dns_resolve *context);
/*@ Submit a query, return file descriptor that the application can poll on. */
int dns_submit(struct dns_resolve *context, [query ...]);
/*@ Receive a DNS response object. */
<obj> dns_recv(struct dns_resolve *context);

memcached backend for the cache

Problem

The resolver now has an LMDB backend (and an in-memory trie, in theory). This is good enough for most deployments and offers local replication. Some anycast deployments might want a distributed cache and remote replication.

Outcome

  • Document namedb API (libknot)
  • Implement memcached backend for namedb API

daemon: improve tty interface

Problem

The CLI interface is based on reading stdin lines, not a TTY, so it doesn't support a text cursor, tab completion, or multiline commands.

Expected outcome

Basically it should behave like a Lua interpreter, but integrated into the libuv loop.
There is already TTY code in libuv, and there are Lua interpreters around, so we might want to reuse something.

  • Real TTY, support for arrows
  • Basic introspection
  • Tab completion

Detection of libuv does not check the version number

If libuv 0.x is installed, "make info" does not complain, but compilation fails:

  CC    daemon/io.c
daemon/io.c: In function ‘udp_bind’:
daemon/io.c:71:19: error: ‘UV_UDP_REUSEADDR’ undeclared (first use in this function)
  unsigned flags = UV_UDP_REUSEADDR;
                   ^
daemon/io.c:71:19: note: each undeclared identifier is reported only once for each function it appears in
...

Once libuv 0.x is replaced by 1.x, it works, so "make info" should test the version.

Clearing the cache at startup

Adding cache.clear() to the beginning or end of the config file doesn't seem to clear the cache, while entering it at the command line does. I have a use case where I want the cache cleared each time the daemon is started.
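One thing worth trying might be to defer the call until the event loop is running (just a guess, sketched with the event API):

-- Run cache.clear() right after startup instead of at config load time.
event.after(0, function ()
    cache.clear()
end)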

Strange behaviour with multiple FORWARD policy and CNAME across them

Hi,
I'm trying to make something similar to unbound's cfg:

forward-zone:
	name: "zone-a"
	forward-addr: 192.168.1.4

forward-zone:
	name: "zone-b"
	forward-addr: 192.168.1.5

So I set this to kresd:

modules = {'policy'}

policy:add(policy.suffix(policy.FORWARD('192.168.1.4'), {todname('zone-a')}))
policy:add(policy.suffix(policy.FORWARD('192.168.1.5'), {todname('zone-b')}))

Expected behaviour (as made by unbound):

root@unbound:~# dig @::1 A a.zone-a
...
;; ANSWER SECTION:
a.zone-a.		300	IN	CNAME	b.zone-b.
b.zone-b.		300	IN	A	192.168.1.1
...

But Knot Resolver handles it... strangely :-/ :

root@kresd:/# dig @::1 A a.zone-a
...
;; connection timed out; no servers could be reached

with 'verbose(true)' I got:

root@kresd:/etc/init.d# /usr/sbin/kresd --addr=127.0.0.1#53 --addr=::1#53 --config=/etc/knot-resolver/kresd.conf --verbose /run/knot-resolver/cache
[system] interactive mode
> verbose(true)
true
> [plan] plan 'a.zone-a.' type 'A'
[resl]   => querying: '192.168.1.4' score: 1425 zone cut: '.' m12n: 'a.zone-A.' type: 'A' proto: 'udp'
[iter]   <= rcode: NOERROR
[iter]   <= cname chain, following
[plan] plan 'b.zone-b.' type 'A'
[resl]   <= server: '192.168.1.4' rtt: 1 ms
[resl]   => using root hints
[resl]   => querying: '2001:dc3::35' score: 10 zone cut: '.' m12n: 'zONe-B.' type: 'NS' proto: 'udp'
[resl]   => querying: '202.12.27.33' score: 10 zone cut: '.' m12n: 'zONe-B.' type: 'NS' proto: 'udp'
[resl]   => querying: '2001:500:9f::42' score: 10 zone cut: '.' m12n: 'zONe-B.' type: 'NS' proto: 'udp'
[resl]   => querying: '199.7.83.42' score: 10 zone cut: '.' m12n: 'zONe-B.' type: 'NS' proto: 'udp'
[wrkr]   => server: '2001:dc3::35' flagged as 'bad'
[wrkr]   => server: '202.12.27.33' flagged as 'bad'
[wrkr]   => server: '2001:500:9f::42' flagged as 'bad'
[wrkr]   => server: '199.7.83.42' flagged as 'bad'
[resl]   => querying: '2001:7fd::1' score: 10 zone cut: '.' m12n: 'zonE-B.' type: 'NS' proto: 'udp'
[resl]   => querying: '193.0.14.129' score: 10 zone cut: '.' m12n: 'zonE-B.' type: 'NS' proto: 'udp'
[resl]   => querying: '2001:503:c27::2:30' score: 10 zone cut: '.' m12n: 'zonE-B.' type: 'NS' proto: 'udp'
[resl]   => querying: '192.58.128.30' score: 10 zone cut: '.' m12n: 'zonE-B.' type: 'NS' proto: 'udp'

Am I doing something wrong that causes kresd to forget the forwarding for zone 'zone-b', or what?
(All tests were made in a network isolated from the Internet.)

Simple queries which don't cross zones work:

root@kresd:/# dig @::1 A b.zone-b
...
;; ANSWER SECTION:
b.zone-b.		300	IN	A	192.168.1.1
...
root@kresd:/# dig @::1 CNAME a.zone-a
...
;; ANSWER SECTION:
a.zone-a.		300	IN	CNAME	b.zone-b.
...
root@kresd:/# dig @::1 A direct-a.zone-a
...
;; ANSWER SECTION:
direct-a.zone-a.	300	IN	CNAME	direct.zone-a.
direct.zone-a.		300	IN	A	2.2.2.2
...
root@kresd:/# dig @::1 A direct-b.zone-b
...
;; ANSWER SECTION:
direct-b.zone-b.	300	IN	CNAME	direct.zone-b.
direct.zone-b.		300	IN	A	3.3.3.3
...

(I'd be glad to write this in Czech, but that is probably not allowed, so I apologize for the clumsy English.)
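
For reference, a hedged configuration sketch rather than a confirmed fix: if plain per-zone forwarding is the goal, kresd's policy.STUB action (where the installed version provides it) forwards a zone without the resolver trying to iterate the CNAME target from the root, which is what the log above shows happening for 'zone-b.':

modules = {'policy'}

-- Hypothetical variant of the configuration above, using STUB instead of FORWARD.
policy.add(policy.suffix(policy.STUB('192.168.1.4'), {todname('zone-a')}))
policy.add(policy.suffix(policy.STUB('192.168.1.5'), {todname('zone-b')}))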

stats api not working and kresc returns nil

Hi, I stood up a new kresd server from the Debian packages on Debian 9. All default API endpoints (stats, metrics, etc.) just return a 404, and kresc returns nil. See the examples below:
curl

# curl -vk https://127.0.0.1:8053/stats
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8053 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-ECDSA-AES256-GCM-SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=knot-01
*  start date: Jan 16 22:42:39 2019 GMT
*  expire date: Apr 16 22:42:39 2019 GMT
*  issuer: CN=knot-01
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x5618111cea80)
> GET /stats HTTP/1.1
> Host: 127.0.0.1:8053
> User-Agent: curl/7.52.1
> Accept: */*
> 
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 404 
< 
* Curl_http_done: called premature == 0
* Connection #0 to host 127.0.0.1 left intact

kresc

# kresc /run/knot-resolver/control@1
Warning! kresc is highly experimental, use at own risk.
Please tell authors what features you expect from client utility.
kresc> modules.load('stats')
true
kresc> stats.get()
nil
kresc> 

Below is my kresd.conf

modules = {
        'policy',
        'hints > iterate',    -- Load /etc/hosts and allow custom root hints
        'stats',              -- Track internal statistics
        'predict',            -- Prefetch expiring/frequent records
        http = {
                host = 'localhost', -- Default: 'localhost'
                port = 8053,        -- Default: 8053
                geoip = '/etc/knot-resolver/GeoLite2-City.mmdb',
                endpoints = {},
        }
}
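
One hedged thing to try (both the module-ordering hypothesis and the exact calls are assumptions for the packaged version): load 'stats' before 'http' explicitly and leave the endpoints table unset, so the built-in /stats and /metrics endpoints get registered rather than overridden:

modules.load('stats')
modules.load('http')

-- Hypothetical: configure http after its dependencies are loaded; omitting
-- 'endpoints' keeps the module's built-in endpoints in place.
http.config({
    host = 'localhost',
    port = 8053,
    geoip = '/etc/knot-resolver/GeoLite2-City.mmdb',
})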

Error when call request.pop()

When I call request.pop(query) from a Lua module, kresd shows the error message: "error: /usr/local/lib/kdns_modules/kres.lua:462: missing declaration for symbol 'kr_rplan_pop'".

My code snippet:

local function rewriteTargetAddress(state, req, ip)
    printLog(fceStartMessage .. ' "rewriteTargetAddress"...', 'debug')
    req = kres.request_t(req)

    local query = req:current()

    req:pop(query)
    req:push(ip, query.type, query.class, query.flags, 0)
    printLog(fceEndMessage .. ' "rewriteTargetAddress"...', 'debug')

    return state
end
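
A hedged workaround sketch: the error suggests kr_rplan_pop is simply missing from the bundled FFI declarations, so declaring it manually before the first req:pop() call may help. The C signature below is assumed to match lib/rplan.h in the installed version:

local ffi = require('ffi')

-- Hypothetical: forward-declare the missing symbol; LuaJIT's FFI accepts
-- pointers to incomplete struct types, so no full struct layout is needed.
ffi.cdef([[
int kr_rplan_pop(struct kr_rplan *rplan, struct kr_query *qry);
]])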

[apparmor] local include support

diff --git a/scripts/kresd.apparmor b/scripts/kresd.apparmor
index 81fa5a1..8ad4c26 100644
--- a/scripts/kresd.apparmor
+++ b/scripts/kresd.apparmor
@@ -26,5 +26,7 @@
   /usr/lib{,64}/kdns_modules/tinyweb/ r,
   /usr/lib{,64}/kdns_modules/tinyweb/* r,
   /var/lib/GeoIP/* r,
+  # Site-specific additions and overrides. See local/README for details.
+  #include <local/usr.bin.kresd>
 }

I just noticed my kresd AppArmor profile referenced my dnsdist local file. Anyway, having those local includes at the end means a user can adapt the profile to local needs while keeping the main profile under distribution control. I'm not sure if we want an install target for the AppArmor profile as well.
