
lua-resty-resolver's People

Contributors

adrianmusial, essh, jkeys089, omonar, rjshrjndrn


lua-resty-resolver's Issues

Stops working after an nginx reload

When nginx is reloaded (with a HUP signal), the timers that perform DNS resolution no longer fire, so none of the shared dict entries get updated and things start to fail as TTLs expire.

This seems to be because the timer responsible for updating these entries is created in a single worker (ensured by only creating it when a _master_ entry is successfully added under the shared key).

However, the problem with this appears to be the following note from the lua-nginx-module documentation: "But note that, the lua_shared_dict's shm storage will not be cleared through a config reload (via the HUP signal, for example)." Since these _master_ entries still exist in the shared dict after a reload, the timers don't get recreated in the new workers that replace the old ones.

I've managed to work around this problem in some very limited testing with a one-line fix in https://github.com/essh/lua-resty-resolver/commit/fde6e43f95901e596b6eb6c2032be41c246a2868. However, I'm not entirely sure this is the best approach, and any feedback would be appreciated. Primarily, there seems to be some potential for race conditions during the reload process, while both the new and old worker processes continue to exist.
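For context, a hedged sketch of the general shape of such a workaround (not necessarily what the linked commit does; the _master_ key name and the dns_cache dict name are assumptions based on the issue text): clear the marker whenever the configuration is (re)loaded, so a worker in the new generation can claim it and restart the resolution timer.

init_by_lua_block {
    -- Assumption: "_master_" is the election marker in the "dns_cache"
    -- shared dict. init_by_lua runs again on a HUP reload, so deleting
    -- the stale marker lets a new-generation worker re-acquire
    -- mastership and restart the resolution timer.
    ngx.shared.dns_cache:delete("_master_")
}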

Resolver in Kubernetes

Hi, Lua newbie here.
First of all, thank you for this awesome plugin.
I have headless services in Kubernetes (DNS resolution returns all of a service's IPs) and want sticky sessions on them. Unfortunately, this is not possible with NGINX OSS, so while looking through OpenResty I found this plugin.

As of now I use the resolver=local setting (in OpenResty) in Kubernetes, because it works on any Kubernetes cluster the app gets installed on. The issue now is: is there any way to avoid giving static DNS resolver addresses?

for example, in this code

master_dns = dns_resolver.new_master{
    cache = "dns_cache",
    upstream_domain = "upstream.ddnsr-demo.com",
    dns_servers = {"172.20.0.2"}
}

Is there any way to avoid hard-coding the 172.20.0.2?

Any help is appreciated.
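A minimal sketch of one common approach, assuming the pod's /etc/resolv.conf points at the cluster DNS; the parsing helper is illustrative and not part of lua-resty-resolver's documented API: read the nameserver entries at init time instead of hard-coding them.

init_by_lua_block {
    -- Illustrative helper: collect the "nameserver" entries from
    -- resolv.conf at startup.
    local function nameservers(path)
        local servers = {}
        local f = io.open(path or "/etc/resolv.conf", "r")
        if not f then return nil, "cannot open resolv.conf" end
        for line in f:lines() do
            local addr = line:match("^%s*nameserver%s+(%S+)")
            if addr then servers[#servers + 1] = addr end
        end
        f:close()
        return servers
    end

    -- dns_resolver obtained as in the snippet above (call shape taken
    -- from the question, not verified against the library).
    master_dns = dns_resolver.new_master{
        cache = "dns_cache",
        upstream_domain = "upstream.ddnsr-demo.com",
        dns_servers = nameservers(),  -- e.g. {"172.20.0.2"}, discovered at runtime
    }
}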

Master resolver dies after expire time when initialized within cache-loader process

I have experienced "random" deaths of the master resolver. Namely, the master correctly resolves the IP address when it is initialized, for example:
2020/02/01 13:14:45 [notice] 30880#30880: *2 [lua] master.lua:167: set(): address: XX.XX.XX.XX, ttl: 20, context: ngx.timer
2020/02/01 13:14:45 [notice] 30880#30880: *2 [lua] master.lua:23: schedule(): next lookup in 18s, context: ngx.timer
and then it dies. The master resolver is not executed again.

I believe I managed to identify the conditions under which the master resolver "dies." My configuration involves a proxy cache zone. When the master resolver assigns itself to the cache-loader process (pid 30880), it "dies" because the cache-loader process is stopped after 1 minute. See the output of ps aux below:
root 30876 0.0 0.1 62580 1204 ? Ss 13:14 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 30877 0.0 0.4 66960 4312 ? S 13:14 0:00 nginx: worker process
www-data 30878 0.0 0.4 66960 4312 ? S 13:14 0:00 nginx: worker process
www-data 30879 0.0 0.4 66960 4088 ? S 13:14 0:00 nginx: cache manager process
www-data 30880 0.0 0.4 66960 4088 ? S 13:14 0:00 nginx: cache loader process

If I understand correctly, the master resolver is assigned to the first worker that initializes. If that happens to be the cache-loader process then, well, it is doomed. There are no problems when the master resolver is assigned to an actual worker process, or even the cache-manager process, because those run continuously.
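A minimal sketch of one possible guard, relying on the documented lua-nginx-module behavior that ngx.worker.id() returns nil in nginx helper processes (cache loader / cache manager); dns_master is a placeholder name for the master created in init_by_lua:

init_worker_by_lua_block {
    -- ngx.worker.id() is nil in the cache-loader and cache-manager
    -- helper processes, so only real workers compete to become master.
    if ngx.worker.id() ~= nil then
        dns_master:init()
    end
}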

Many "failed to lookup address for DOMAIN: no hosts available while connecting to upstream" errors

Hello,

I'm getting this error message, and the affected requests instantly return HTTP 500. The error occurs for some requests, and after I restarted my pod (in Kubernetes) the issue went away. I want to prevent this from happening again. How can I solve it?

Error message:

[lua] balancer_by_lua:4: failed to lookup address for domain1: no hosts available while connecting to upstream

and here is my configuration

lua_shared_dict domain1_cache 256k;
lua_shared_dict domain2_cache 256k;


init_by_lua_block {
    local err
    local resolver_master = require "resolver.master"

    domain1_master, err = resolver_master:new("domain1_cache", "${domain1_HOST}", {"10.0.0.53"}) -- Kubernetes DNS server
    if not domain1_master then
        error("failed to create domain1 resolver master: " .. err)
    end

    domain2_master, err = resolver_master:new("domain2_cache", "${domain2_HOST}", {"10.0.0.53"}) -- Kubernetes DNS server
    if not domain2_master then
        error("failed to create domain2 resolver master: " .. err)
    end
}

init_worker_by_lua_block {
    local err

    domain1_master:init()
    domain1_client, err = domain1_master:client()
    if not domain1_client then
        error("failed to create domain1 resolver client: " .. err)
    end

    domain2_master:init()
    domain2_client, err = domain2_master:client()
    if not domain2_client then
        error("failed to create domain2 resolver client: " .. err)
    end
}

upstream domain1 {
    server 0.0.0.1:443;

    balancer_by_lua_block {
        local address, err = domain1_client:get(true)
        if not address then
            ngx.log(ngx.ERR, "failed to lookup address for domain1: ", err)
            return ngx.exit(500)
        end

        local ok, err = require("ngx.balancer").set_current_peer(address, 443)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer for domain1: ", err)
            return ngx.exit(500)
        end
    }

    keepalive 4;
}

upstream domain2 {
    server 0.0.0.1:443;

    balancer_by_lua_block {
        local address, err = domain2_client:get(true)
        if not address then
            ngx.log(ngx.ERR, "failed to lookup address for domain2: ", err)
            return ngx.exit(500)
        end

        local ok, err = require("ngx.balancer").set_current_peer(address, 443)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer for domain2: ", err)
            return ngx.exit(500)
        end
    }

    keepalive 4;
}
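One hedged way to narrow this down, reusing only the client API already shown above (the location itself is hypothetical): expose what the client currently sees, to distinguish a cache that was never populated (master never started, or died) from entries that merely expired.

location = /debug/dns {
    content_by_lua_block {
        -- Reports the address the balancer would use right now.
        local address, err = domain1_client:get(true)
        ngx.say("domain1: ", address or "nil", " (err: ", err or "none", ")")
    }
}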

Add lib on luarocks

Hello, thank you for this library.

Do you intend to publish this library to the LuaRocks repository? If so, how can I help you achieve that?

Adding many domains

Awesome library, btw. Most of the other libs require a resolve in every location block; I like how this one takes care of it in the background.

  1. Is there a recommended practice for adding many domain names?
    Do we just repeat this call per domain (see the sketch after this list)? cdnjs_master, err = require("resolver.master"):new("dns_cache", 'domain', ...)
  2. What does "(may be shared for multiple domains but for best perf use separate zone per domain)" mean? Should we use a separate lua_shared_dict dns_cache 1m for each domain?
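A hedged sketch of the repeated-call pattern from question 1, with one master and one shared dict per domain (per the "separate zone per domain" advice); the domain names, dict names, and DNS server are illustrative:

init_by_lua_block {
    local resolver_master = require "resolver.master"

    -- One shared dict per domain must also be declared in nginx.conf,
    -- e.g.: lua_shared_dict example_com_cache 1m;
    local domains = { "example.com", "example.org" }

    masters = {}  -- global, read later in init_worker_by_lua
    for _, domain in ipairs(domains) do
        local dict = domain:gsub("%.", "_") .. "_cache"
        local m, err = resolver_master:new(dict, domain, {"10.0.0.53"})
        if not m then
            error("failed to create resolver master for " .. domain .. ": " .. err)
        end
        masters[domain] = m
    end
}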

attempt to concatenate field 'address' (a nil value) when using CNAME'd domain

I'm basically copying most of what I found in the synopsis and got this error:

[error] 21#21: *6 lua entry thread aborted: runtime error: /usr/local/openresty/site/lualib/resolver/master.lua:172: attempt to concatenate field 'address' (a nil value)
stack traceback:
coroutine 0:
	/usr/local/openresty/site/lualib/resolver/master.lua: in function 'set'
	/usr/local/openresty/site/lualib/resolver/master.lua:70: in function 'resolve'
	/usr/local/openresty/site/lualib/resolver/master.lua:132: in function </usr/local/openresty/site/lualib/resolver/master.lua:130>, context: ngx.timer

Any ideas?
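For what it's worth, a hedged guess at the failure mode: in lua-resty-dns answer tables a CNAME record carries a cname field rather than an address field, so resolving a CNAME'd domain can surface answers whose address is nil, which would make a concatenation like the one in master.lua:172 blow up. A guard of roughly this shape (illustrative, not the library's actual code; answers is assumed in scope and cache_entry is a hypothetical helper) skips non-address records:

for _, ans in ipairs(answers) do
    -- Only A/AAAA records carry `address`; CNAME records carry `cname`.
    if ans.address then
        cache_entry(ans.address, ans.ttl)  -- hypothetical helper
    end
end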

Question: DNS cache for many upstreams?

The Lua resolver works fine for a single upstream domain, but what if there are several?
Copying and pasting the Lua block for each of them seems ugly. Is there an easy way to use the cache for arbitrary hostnames, or at least have it scale to a dozen backends?
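A hedged sketch of one way to avoid the duplication, assuming the per-domain clients are collected into a table in init_worker_by_lua and that something like a backend_host variable is set per location (both names are hypothetical):

upstream dynamic_backend {
    server 0.0.0.1:443;

    balancer_by_lua_block {
        -- `clients` is a hypothetical table filled in init_worker_by_lua,
        -- keyed by domain; ngx.var.backend_host selects the entry.
        local client = clients[ngx.var.backend_host]
        local address, err = client and client:get(true)
        if not address then
            ngx.log(ngx.ERR, "no address for ", ngx.var.backend_host, ": ", err)
            return ngx.exit(500)
        end

        local ok, berr = require("ngx.balancer").set_current_peer(address, 443)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", berr)
            return ngx.exit(500)
        end
    }
}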

Strange query pattern

Hi,

I have logged the query pattern and see something "strange":

2018/10/11 19:51:25 [info] 12708#12708: *2 [lua] master.lua:72: resolve(): master sync, context: ngx.timer
2018/10/11 19:51:25 [info] 12708#12708: *2 [lua] master.lua:24: schedule(): master next sync in 15 s, context: ngx.timer
2018/10/11 19:51:40 [info] 12708#12708: *5 [lua] master.lua:72: resolve(): master sync, context: ngx.timer
2018/10/11 19:51:40 [info] 12708#12708: *5 [lua] master.lua:24: schedule(): master next sync in 8 s, context: ngx.timer
2018/10/11 19:51:48 [info] 12708#12708: *7 [lua] master.lua:72: resolve(): master sync, context: ngx.timer
2018/10/11 19:51:48 [info] 12708#12708: *7 [lua] master.lua:24: schedule(): master next sync in 298 s, context: ngx.timer
2018/10/11 19:56:46 [info] 12708#12708: *10 [lua] master.lua:72: resolve(): master sync, context: ngx.timer
2018/10/11 19:56:46 [info] 12708#12708: *10 [lua] master.lua:24: schedule(): master next sync in 8 s, context: ngx.timer
2018/10/11 19:56:54 [info] 12708#12708: *12 [lua] master.lua:72: resolve(): master sync, context: ngx.timer
2018/10/11 19:56:54 [info] 12708#12708: *12 [lua] master.lua:24: schedule(): master next sync in 298 s, context: ngx.timer

It seems that some queries are executed too early, that is, before the TTL expires. Since the TTL is still positive, the next query is then executed again after a very short time. Should one actually add the timeout back in

next_res = next_res - timeout

to postpone the query?
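For illustration only, a hedged reading of the numbers above (the timeout value is a guess, and this is not the library's code): if the next lookup is scheduled at ttl - timeout, a lookup that fires while the record is still fresh sees a small positive remaining TTL and schedules another lookup after a very short interval.

-- Illustrative arithmetic only.
local timeout = 2               -- assumed resolver timeout, in seconds
local ttl = 10                  -- remaining TTL when the early lookup fires
local next_res = ttl - timeout  -- 8, matching the "next sync in 8 s" lines
print("next lookup in " .. next_res .. " s")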

[Feature Request] Healthcheck

I really like your library and would love to have a built-in healthcheck feature. Many applications need some time until they're fully started, and the DNS entry is often updated too early, as soon as a new application instance is started.

It would be great to have an option to only add hosts to the DNS cache if an HTTP healthcheck was successful. Would this be possible?
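A minimal sketch of what such a filter could look like, using lua-resty-http (a separate OpenResty library) from the resolver's ngx.timer context; the /healthz endpoint, port, and integration point are assumptions, not part of lua-resty-resolver:

local http = require "resty.http"

-- Returns true if the instance answers 200 on its health endpoint.
local function is_healthy(address, port)
    local httpc = http.new()
    httpc:set_timeout(1000)  -- 1 s per probe
    local res, err = httpc:request_uri(
        "http://" .. address .. ":" .. port .. "/healthz")
    return res ~= nil and res.status == 200, err
end

-- Hypothetical integration point, inside the master's resolve step:
-- only cache answers that pass the probe, e.g.
-- if is_healthy(ans.address, 80) then <cache ans> end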
