polaris-gslb
A free, open source GSLB (Global Server Load Balancing) solution.
License: Other
Several users reported issues that were traced to older Python 3 versions; Polaris should refuse to start on versions below the minimum required.
The current dict iteration method is suboptimal.
Hi,
First of all thanks for a really great project, it seems to work really well.
We noticed that IPv6 is not currently supported though, which in 2018 is a really important feature :)
I can see that the Python ipaddress library you use supports IPv6: https://docs.python.org/3.4/library/ipaddress.html
For yaml structure, I thought something like this could work in the member pool section:
members:
  - ip: 10.1.1.10
    name: www2-dc2
    weight: 1
  - ip6: 2001:db8::10
    name: www2-dc2
    weight: 1
Is this something that could be implemented?
Thanks,
Gavin
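As a quick sketch of how the ipaddress module could back such a config (parse_member_ip is an illustrative helper, not a Polaris function):

```python
import ipaddress

def parse_member_ip(member):
    """Accept either an 'ip' (IPv4) or 'ip6' (IPv6) key, as proposed above."""
    raw = member.get('ip') or member.get('ip6')
    # ip_address() returns IPv4Address or IPv6Address, raising ValueError on bad input
    return ipaddress.ip_address(raw)

print(parse_member_ip({'ip': '10.1.1.10'}).version)     # 4
print(parse_member_ip({'ip6': '2001:db8::10'}).version) # 6
```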
So that health checks can be run on a different machine.
With the current implementation, switching to daemon mode makes it impossible to perform PID file creation or any other check.
Assuming that limited number of connecting clients (recursive resolvers) to be a common case, topology lookup logic can be optimised using a memoization technique, e.g. the results can be cached locally so that subsequent lookups for the same addresses return in O(1).
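A minimal sketch of the suggested memoization, assuming a hypothetical region_for() helper and topology table (both names are illustrative, not from the Polaris codebase):

```python
import ipaddress
from functools import lru_cache

# Hypothetical topology map: region name -> list of networks.
TOPOLOGY = {
    'dc1': [ipaddress.ip_network('10.1.0.0/16')],
    'dc2': [ipaddress.ip_network('10.2.0.0/16')],
}

@lru_cache(maxsize=4096)
def region_for(addr):
    """Linear scan on the first lookup; cached O(1) for repeat resolver addresses."""
    ip = ipaddress.ip_address(addr)
    for region, nets in TOPOLOGY.items():
        for net in nets:
            if ip in net:
                return region
    return None
```

Since the set of recursive resolvers querying a GSLB tends to be small and stable, the cache hit rate should be high in practice.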
The existing value of 3 is too restrictive.
How does GSLB work?
How do I set up the database, or whatever else is needed, to make weighted topology work?
polaris-gslb/polaris_health/core/reactor.py
Line 119 in 7476b9e
memcache.Client() expects a list as its "servers" argument.
If pdns is launched with the gmysql backend, how should the polaris-gslb config be set up?
This can be used to store arbitrary information about the entity.
When I start the health checker in foreground mode, I get a Unicode error. Everything still seems to work, but I am not sure if that is a bad thing.
I have it running on a CentOS 7 system with python 3.4.3.
$ python3.4 -V
Python 3.4.3
[root@omg-vm-gslb-1 bin]# /opt/polaris/bin/polaris-health -d start
2015-11-11 20:29:07,118 [INFO] polaris_health.core.reactor: starting Polaris health...
2015-11-11 20:29:07,343 [DEBUG] polaris_health.core.reactor: writting /opt/polaris/run/polaris-health.pid
Traceback (most recent call last):
File "/usr/lib64/python3.4/encodings/idna.py", line 165, in encode
raise UnicodeError("label empty or too long")
UnicodeError: label empty or too long
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/polaris/bin/polaris-health", line 177, in <module>
main()
File "/opt/polaris/bin/polaris-health", line 89, in main
start(debug=opts.d)
File "/opt/polaris/bin/polaris-health", line 124, in start
polaris_health.main()
File "/usr/lib/python3.4/site-packages/polaris_gslb-0.3.1-py3.4.egg/polaris_health/__init__.py", line 26, in main
File "/usr/lib/python3.4/site-packages/polaris_gslb-0.3.1-py3.4.egg/polaris_health/core/reactor.py", line 134, in __init__
File "/usr/lib/python3.4/site-packages/polaris_gslb-0.3.1-py3.4.egg/polaris_health/core/reactor.py", line 180, in _heartbeat_loop
File "/usr/lib/python3.4/site-packages/python3_memcached-1.51-py3.4.egg/memcache.py", line 573, in set
File "/usr/lib/python3.4/site-packages/python3_memcached-1.51-py3.4.egg/memcache.py", line 799, in _set
File "/usr/lib/python3.4/site-packages/python3_memcached-1.51-py3.4.egg/memcache.py", line 343, in _get_server
File "/usr/lib/python3.4/site-packages/python3_memcached-1.51-py3.4.egg/memcache.py", line 1113, in connect
File "/usr/lib/python3.4/site-packages/python3_memcached-1.51-py3.4.egg/memcache.py", line 1133, in _get_socket
UnicodeError: encoding with 'idna' codec failed (UnicodeError: label empty or too long)
Using the Polaris topology feature.
Geographical proximity support possible?
Hi Anton, great job with polaris-gslb!
I want to use it for several vhosts and when config file grows it is very difficult to perform a state check.
I'm wondering if you're planning to add an interface for state monitoring or if there's a better way to achieve this.
I have just added a little script to check the member stats; I don't know if you need it.
#!/usr/local/bin/python3.4
# -*- coding: utf-8 -*-
"""Display Polaris health generic state"""
import memcache

mc = memcache.Client(['127.0.0.1'])  # host without a port defaults to 11211
response_line = "CRITICAL :"
error = False
val = mc.get('polaris_health:generic_state')
for global_name in val["globalnames"]:
    pool_name = val["globalnames"][global_name]["pool_name"]
    for member in val["pools"][pool_name]["members"]:
        if not member["status"]:
            response_line += " member " + member["name"] + " from pool " + pool_name + " is down ! "
            error = True
if error:
    print(response_line)
    exit(2)
else:
    print('OK : all members are up')
I suspect that many public DNS providers are not updating their cache because the serial number of the SOA record is not being incremented. In fact there is no SOA record. If I dig against Polaris itself, everything works great, but once I start digging against Google DNS, it appears to keep the first cached value forever. I cranked the TTL down to 1, which gives the Google record a TTL of 0 (no cache).
Under high usage of polaris-health I see that warning log continually. A polaris-health restart seems to fix it for a while, and then it starts again.
I am not seeing any other symptom of this error but I am assuming it's bad as it has to do with the state tracker/change code in the tracker submodule.
I have been attempting to debug this by adding logging but nothing useful is getting returned.
I added the following:
# push generic form of the state
# add timestamp to the object
generic_form['timestamp'] = STATE_TIMESTAMP
val = self.sm.set(config.BASE['SHARED_MEM_GENERIC_STATE_KEY'],
                  generic_form)
if val is True:
    pushes_ok += 1
else:
    log_msg = 'failed to write generic state to the shared memory'
    LOG.warning(log_msg)
    LOG.warning(val)
    LOG.warning(config.BASE['SHARED_MEM_GENERIC_STATE_KEY'], generic_form)
And the output is this when the issue is happening:
2018-03-16T11:20:49.544714-07:00 2018-03-16 11:20:49,544 [WARNING] polaris_health.tracker: failed to write generic state to the shared memory
2018-03-16T11:20:49.544788-07:00 2018-03-16 11:20:49,544 [WARNING] polaris_health.tracker: 0
2018-03-16T11:20:49.544893-07:00 2018-03-16 11:20:49,544 [WARNING] polaris_health.tracker: polaris_health:generic_state
2018-03-16T11:20:50.191790-07:00 2018-03-16 11:20:50,191 [WARNING] polaris_health.tracker: failed to write generic state to the shared memory
2018-03-16T11:20:50.191881-07:00 2018-03-16 11:20:50,191 [WARNING] polaris_health.tracker: 0
2018-03-16T11:20:50.191985-07:00 2018-03-16 11:20:50,191 [WARNING] polaris_health.tracker: polaris_health:generic_state
2018-03-16T11:20:50.868508-07:00 2018-03-16 11:20:50,868 [WARNING] polaris_health.tracker: failed to write generic state to the shared memory
2018-03-16T11:20:50.868582-07:00 2018-03-16 11:20:50,868 [WARNING] polaris_health.tracker: 0
2018-03-16T11:20:50.868696-07:00 2018-03-16 11:20:50,868 [WARNING] polaris_health.tracker: polaris_health:generic_state
Please let me know how I can troubleshoot this further or why it gets into this state.
Hi,
Setting Polaris up with our current (third-party) DNS system as a delegated or forwarded zone, answers are computed from the referring DNS server's source IP address, not the querying client's IP address. If the referring DNS server is in a different subnet than the client, this ends up sending the client to the wrong place, because the query is answered based on the referring server's IP rather than the IP of the client performing the query.
Is there any way around this?
Regards,
micush
RFC 2616: a client MUST include a Host header field in all HTTP/1.1 request messages. If the requested URI does not include an Internet host name for the service being requested, then the Host header field MUST be given with an empty value.
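For illustration, a minimal HTTP/1.1 request honoring that requirement might be built like this (a sketch, not the actual Polaris monitor code; build_request is an illustrative name):

```python
def build_request(host, path='/'):
    """Minimal HTTP/1.1 GET; the Host header is mandatory per RFC 2616."""
    return ('GET {} HTTP/1.1\r\n'
            'Host: {}\r\n'
            'Connection: close\r\n'
            '\r\n').format(path, host).encode('ascii')

print(build_request('www.example.com', '/gslb/health'))
```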
Why does this require 3.4.3+? Is 3.4.2 broken?
TIA,
Hi Aaron,
Thanks a lot for the quick help with the query in #57.
But now I have another issue.
I have set up GSLB with two servers, and it worked fine when both were up:
I get one of the two IPs randomly and hit requests against it.
But when I shut down one of the services, clients still try to connect to that service and fail.
Output of get-ppdns-state and get-generic-state looks fine to me
{
  "timestamp": 1557838693.2270024,
  "pools": {
    "www-gslbpoc": {
      "dist_tables": {
        "_default": {
          "index": 0,
          "rotation": [
            "10.247.10.1"
          ],
          "num_unique_addrs": 1
        }
      },
      "max_addrs_returned": 1,
      "fallback": "any",
      "lb_method": "wrr",
      "status": true
    }
  },
  "globalnames": {
    "myservice.gslbpoc.com": {
      "pool_name": "www-gslbpoc",
      "ttl": 1
    }
  }
}
{
  "pools": {
    "www-gslbpoc": {
      "fallback": "any",
      "members": [
        {
          "weight": 1,
          "region": "None",
          "status_reason": "monitor passed",
          "status": true,
          "monitor_ip": "10.247.10.1",
          "ip": "10.247.10.1",
          "retries_left": 2,
          "name": "www1-dc1"
        },
        {
          "weight": 1,
          "region": "None",
          "status_reason": "timeout timed out during socket.connect()",
          "status": false,
          "monitor_ip": "10.247.20.1",
          "ip": "10.247.20.1",
          "retries_left": 0,
          "name": "www2-dc2"
        }
      ],
      "lb_method": "wrr",
      "last_status": true,
      "max_addrs_returned": 1,
      "monitor": {
        "hostname": "myservice.gslbpoc.com",
        "timeout": 5,
        "interval": 10,
        "expected_codes": [
          200
        ],
        "url_path": "/gslb/health",
        "use_ssl": false,
        "port": 8080,
        "retries": 2,
        "name": "http"
      },
      "name": "www-gslbpoc"
    }
  },
  "globalnames": {
    "myservice.gslbpoc.com": {
      "ttl": 1,
      "pool_name": "www-gslbpoc",
      "name": "myservice.gslbpoc.com"
    }
  },
  "timestamp": 1557838693.0588582
}
[root@rhel7 bin]# curl -v http://myservice.gslbpoc.com:8080/gslb/service
GET /gslb/service HTTP/1.1
User-Agent: curl/7.29.0
Host: myservice.gslbpoc.com:8080
Accept: */*
< HTTP/1.1 200
< Content-Type: text/plain;charset=UTF-8
< Content-Length: 20
< Date: Tue, 14 May 2019 13:05:41 GMT
<
[root@rhel7 bin]# curl -v http://myservice.gslbpoc.com:8080/gslb/service
Regards,
Nitin Agarwal
I just installed polaris-gslb on a new Ubuntu 14.04 / Python 3.4 box following your wiki. If I try to start it (backend or health monitor) I get this:
/opt/polaris/bin# python3 polaris-health
Traceback (most recent call last):
  File "polaris-health", line 16, in <module>
    import polaris_health
  File "<frozen importlib._bootstrap>", line 2214, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2203, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.3.0-py3.4.egg/polaris_health/__init__.py", line 21, in <module>
  File "<frozen importlib._bootstrap>", line 2214, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2203, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.3.0-py3.4.egg/polaris_health/core/reactor.py", line 14, in <module>
  File "<frozen importlib._bootstrap>", line 2214, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2203, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.3.0-py3.4.egg/polaris_health/core/tracker.py", line 10, in <module>
  File "<frozen importlib._bootstrap>", line 2214, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2203, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.3.0-py3.4.egg/polaris_health/state/__init__.py", line 7, in <module>
  File "<frozen importlib._bootstrap>", line 2214, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2203, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.3.0-py3.4.egg/polaris_health/state/pool.py", line 7, in <module>
  File "<frozen importlib._bootstrap>", line 2214, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2203, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.3.0-py3.4.egg/polaris_health/monitors/__init__.py", line 61, in <module>
  File "<frozen importlib._bootstrap>", line 2214, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2203, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.3.0-py3.4.egg/polaris_health/monitors/http.py", line 7, in <module>
  File "<frozen importlib._bootstrap>", line 2214, in _find_and_load
  File "<frozen importlib._bootstrap>", line 2203, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
  File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.3.0-py3.4.egg/polaris_health/protocols/http.py", line 15, in <module>
AttributeError: 'module' object has no attribute '_create_unverified_context'
Any idea ?
The mapping works when loading the initial config and mapping servers to regions, however I can not get the resolver to map clients to a topology record. I always get 'None' for the region regardless of how my config is setup.
Hello
In a fresh installation on Debian jessie I got this error:
root@site:/opt/polaris/etc# /opt/polaris/bin/polaris-health start
Starting Polaris Health... Traceback (most recent call last):
File "/opt/polaris/bin/polaris-health-control", line 11, in <module>
from polaris_health import config, guardian
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.4.0-py3.4.egg/polaris_health/guardian/__init__.py", line 15, in <module>
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.4.0-py3.4.egg/polaris_health/tracker/__init__.py", line 11, in <module>
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.4.0-py3.4.egg/polaris_health/state/__init__.py", line 7, in <module>
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.4.0-py3.4.egg/polaris_health/state/pool.py", line 7, in <module>
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.4.0-py3.4.egg/polaris_health/monitors/__init__.py", line 60, in <module>
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.4.0-py3.4.egg/polaris_health/monitors/http.py", line 7, in <module>
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1191, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1161, in _load_backward_compatible
File "/usr/local/lib/python3.4/dist-packages/polaris_gslb-0.4.0-py3.4.egg/polaris_health/protocols/http.py", line 16, in <module>
AttributeError: 'module' object has no attribute '_create_unverified_context'
failed to start!
polaris-lb.yml
pools:
  www:
    monitor: http
    monitor_params:
      use_ssl: false
      hostname: new.site.com
      url_path: /healthcheck?check_all=true
    lb_method: wrr
    fallback: any
    members:
      - ip: 185.155
        name: haproxy-pa
        weight: 1
      - ip: 185.155
        name: haproxy-af
        weight: 1

globalnames:
  new.site.com:
    pool: www
    ttl: 1
OS info:
Distributor ID: Debian
Description: Debian GNU/Linux 8.5 (jessie)
Release: 8.5
Codename: jessie
pdns info :
Aug 08 10:47:32 PowerDNS Authoritative Server 3.4.1 ([email protected]) (C) 2001-2014 PowerDNS.COM BV
Aug 08 10:47:32 Using 64-bits mode. Built on 20160510164059 by root@conan, gcc 4.9.2.
python info :
/usr/bin/python3
Python 3.4.2 (default, Oct 8 2014, 10:45:20)
Is there anything I missed in the configuration?
thank you
I tried to read the code, but failed to figure it out.
I thought there must be something like sorting by weight, or updating weights from the health checks, but no.
Can anyone tell me how it works? Thanks a lot.
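For what it's worth, weighted round-robin is often implemented without ever mutating the weights: each healthy member is repeated weight times in a rotation list, which is then stepped through index by index. A sketch under that assumption (not the exact Polaris code; build_rotation is an illustrative name):

```python
import random

def build_rotation(members):
    """members: dicts with 'ip', 'weight', and a health flag 'status'.

    Each healthy member appears 'weight' times; down members are excluded.
    """
    rotation = []
    for m in members:
        if m['status']:
            rotation.extend([m['ip']] * m['weight'])
    random.shuffle(rotation)  # avoid always starting at the same member
    return rotation

members = [
    {'ip': '10.0.0.1', 'weight': 2, 'status': True},
    {'ip': '10.0.0.2', 'weight': 1, 'status': True},
    {'ip': '10.0.0.3', 'weight': 1, 'status': False},  # down, excluded
]
print(build_rotation(members))
```

Answering queries then just means returning rotation[index] and advancing the index modulo the list length, which is consistent with the "index"/"rotation" fields visible in the get-ppdns-state dumps elsewhere on this page.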
Hi, I want to use this great project as a GSLB in a CDN scenario. I've read the wiki carefully, but I can't find dynamic configuration of the pools, and I don't know whether it supports smart DNS resolution according to the geo location of the source IP.
Would you be so kind as to give some advice on how I can solve these problems based on your project? For example by re-developing it, or maybe via some hidden API in your project?
Thanks!
Hello Anton,
I don't know if this is the right place to ask questions, but you said on the pdns mailing list that it is.
We have a web application hosted in 2 datacenters with HAProxy as the HTTP load balancer; we route users to both servers using 2 DNS A records (round robin). The problem is that DNS round robin isn't a good solution for us. I searched, and I think Polaris does the job for us, but I can't fully understand it yet.
Is this right: Polaris acts as the LB here, checks the load balancers, then routes the user to the right datacenter via DNS? Meaning that after running Polaris I just set an A record pointing at the Polaris server and Polaris does the job?
Is this true? Or have I just created a new single point of failure?
I added a new A record pointing at the Polaris server and removed all the others; it's not working and no errors are reported in start-debug, so maybe I'm just thinking about it wrong, or the tool is not what I think it is.
There is nothing on the wiki about what to do after installation.
Thank you.
Dear sir,
How do I add a domain in pdns-server? Because the config has now changed to:
launch=remote
remote-connection-string=pipe:command=/opt/polaris/bin/polaris-pdns,timeout=2000
there is no SQL backend to save it in...
Thanks.
Hi,
When I configure my HTTP healthcheck to the following :
pools:
  example1:
    monitor: http
    monitor_params:
      use_ssl: true
      hostname: www.example.com
      url_path: /
      interval: 30
      expected_codes:
        - 302
my polaris-gslb fails to start. However, when I change the path to one that returns a 200 and remove expected_codes, it works. Could you please look into this?
Hi,
I have a question: on my master node I've configured Polaris with PowerDNS, and I am using this configuration:
polaris-lb.yaml
pools:
  www-example:
    monitor: tcp
    monitor_params:
      port: 22
      timeout: 1.0
    lb_method: twrr
    fallback: any
    members:
      - ip: 192.168.168.10
        name: master
        weight: 1
      - ip: 192.168.168.100
        name: minion-1
        weight: 1
      - ip: 192.168.168.101
        name: minion-2
        weight: 1

globalnames:
  www.example.com:
    pool: www-example
    ttl: 1
polaris-topology.yaml
Master:
  - 192.168.168.10/32
Minions:
  - 192.168.168.96/29
When I do a lookup from my master node (master region) I get the proper response: it always gives me back 192.168.168.10.
[root@salt log]# dig www.example.com @192.168.168.10
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.47.rc1.el6_8.3 <<>> www.example.com @192.168.168.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8707
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;www.example.com. IN A
;; ANSWER SECTION:
www.example.com. 1 IN A 192.168.168.10
;; Query time: 92 msec
;; SERVER: 192.168.168.10#53(192.168.168.10)
;; WHEN: Fri Jan 20 11:36:44 2017
;; MSG SIZE rcvd: 49
However, when I do exactly the same from my minion nodes (minions region), I get responses with the IPs of minion-1 and minion-2, but it also gives me the IP of the master.
Did I do something wrong in the configuration, or did I hit a bug?
I can't find anything about a log on the Polaris pages. Is there a log where I can trace these lookups? I didn't see anything when running Polaris with start-debug.
Hi,
I have done the Polaris installation and followed all the steps mentioned.
I used the dig command to check the desired output, and I think I am getting the proper result.
But after that there is nothing in the documentation about what to do next. I have set up a DNS entry and am trying to open it over HTTP, but it is not giving the expected output.
Please find below the steps I have done:
pools:
  www-gslbpoc:
    monitor: http
    monitor_params:
      use_ssl: false
      hostname: myservice.gslbpoc.com
      url_path: /gslb/health
      port: 8080
      expected_codes:
        - 200
    lb_method: wrr
    fallback: any
    members:
      - ip: 10.247.10.1
        name: www1-dc1
        weight: 1
      - ip: 10.247.20.1
        name: www2-dc2
        weight: 1

globalnames:
  myservice.gslbpoc.com:
    pool: www-gslbpoc
    ttl: 1
Output of "polaris-memcache-control 127.0.0.1 get-ppdns-state":
{
  "globalnames": {
    "myservice.gslbpoc.com": {
      "pool_name": "www-gslbpoc",
      "ttl": 1
    }
  },
  "pools": {
    "www-gslbpoc": {
      "dist_tables": {
        "_default": {
          "index": 1,
          "rotation": [
            "10.247.10.1",
            "10.247.20.1"
          ],
          "num_unique_addrs": 2
        }
      },
      "fallback": "any",
      "status": true,
      "max_addrs_returned": 1,
      "lb_method": "wrr"
    }
  },
  "timestamp": 1557494215.115039
}
Output of "polaris-memcache-control 127.0.0.1 get-generic-state":
{
  "timestamp": 1557494214.7694283,
  "globalnames": {
    "myservice.gslbpoc.com": {
      "name": "myservice.gslbpoc.com",
      "pool_name": "www-gslbpoc",
      "ttl": 1
    }
  },
  "pools": {
    "www-gslbpoc": {
      "name": "www-gslbpoc",
      "monitor": {
        "timeout": 5,
        "name": "http",
        "hostname": "myservice.gslbpoc.com",
        "use_ssl": false,
        "interval": 10,
        "url_path": "/gslb/health",
        "retries": 2,
        "expected_codes": [
          200
        ],
        "port": 8080
      },
      "members": [
        {
          "name": "www1-dc1",
          "status_reason": "monitor passed",
          "retries_left": 2,
          "status": true,
          "monitor_ip": "10.247.10.1",
          "ip": "10.247.10.1",
          "region": "None",
          "weight": 1
        },
        {
          "name": "www2-dc2",
          "status_reason": "monitor passed",
          "retries_left": 2,
          "status": true,
          "monitor_ip": "10.247.20.1",
          "ip": "10.247.20.1",
          "region": "None",
          "weight": 1
        }
      ],
      "fallback": "any",
      "max_addrs_returned": 1,
      "last_status": true,
      "lb_method": "wrr"
    }
  }
}
Output of "dig @polaric-gslb-ip myservice.gslbpoc.com":
I ran it multiple times and got both IPs in the output.
; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> @polaric-gslb-ip myservice.gslbpoc.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10857
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1680
;; QUESTION SECTION:
;myservice.gslbpoc.com. IN A
;; ANSWER SECTION:
myservice.gslbpoc.com. 1 IN A 10.247.20.1
;; Query time: 0 msec
;; SERVER: polaric-gslb-ip#53(polaric-gslb-ip)
;; WHEN: Mon May 13 09:27:21 IDT 2019
;; MSG SIZE rcvd: 66
; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> @polaric-gslb-ip myservice.gslbpoc.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3441
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1680
;; QUESTION SECTION:
;myservice.gslbpoc.com. IN A
;; ANSWER SECTION:
myservice.gslbpoc.com. 1 IN A 10.247.10.1
;; Query time: 0 msec
;; SERVER: polaric-gslb-ip#53(polaric-gslb-ip)
;; WHEN: Mon May 13 09:27:23 IDT 2019
;; MSG SIZE rcvd: 66
[root@rhel7 ~]# curl -v http://polaric-ip:80
About to connect() to polaric-ip port 80 (#0)
Trying polaric-ip...
Connection refused
Failed connect to polaric-ip:80; Connection refused
Closing connection 0
curl: (7) Failed connect to polaric-ip:80; Connection refused
[root@rhel7 ~]# curl -v http://polaric-ip:8080
About to connect() to polaric-ip port 8080 (#0)
Trying polaric-ip...
Connection refused
Failed connect to polaric-ip:8080; Connection refused
Closing connection 0
curl: (7) Failed connect to polaric-ip:8080; Connection refused
[root@rhel7 ~]# curl -v http://myservice.gslbpoc.com:80
Could not resolve host: myservice.gslbpoc.com; Unknown error
Closing connection 0
curl: (6) Could not resolve host: myservice.gslbpoc.com; Unknown error
[root@rhel7 ~]# curl -v http://myservice.gslbpoc.com:8080
Could not resolve host: myservice.gslbpoc.com; Unknown error
Closing connection 0
curl: (6) Could not resolve host: myservice.gslbpoc.com; Unknown error
Please let me know if I missed any step.
I am attempting to fix this on my own, however I thought I would report it as well in case you have some ideas on how to fix it.
Here is the situation:
In order to invoke a config change polaris-health must be restarted. On restart it pushes a distribution table with everything marked UP. Only when a probe comes back unhealthy does it toggle it DOWN.
This is causing an issue as DNS traffic is getting sent to unhealthy nodes while the process gathers endpoint health and brings them offline.
The request is to either pull state from memcached (if available) on startup, or provide a control/guardian command to reload the config without resetting all state. Both would be absolutely ideal.
This would also be a performance boost to startup as currently the system is writing lots of 'unneeded' state changes to memcached.
The first configured member is handed out unless it's down, in which case the next one is handed out, and so on.
This will require a change in the configuration format: pool members will become a list instead of the current dictionary.
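The failover behavior described above can be sketched as a simple ordered scan over the member list (first_healthy is an illustrative name, not Polaris code):

```python
def first_healthy(members):
    """Return the IP of the first healthy member in configured order, else None.

    An ordered list (rather than a dict) is required so that "first" is well defined.
    """
    for m in members:
        if m['status']:
            return m['ip']
    return None

members = [
    {'ip': '10.0.0.1', 'status': False},  # primary, currently down
    {'ip': '10.0.0.2', 'status': True},   # first healthy backup wins
    {'ip': '10.0.0.3', 'status': True},
]
print(first_healthy(members))
```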
When loading a configuration we should spread out the health check execution times a little to even out the load.
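One common way to spread the load is to give each monitor a random initial delay within its check interval, so the first runs don't all fire at once (a sketch; initial_delays is an illustrative helper, not part of Polaris):

```python
import random

def initial_delays(num_checks, interval):
    """Uniformly spread the first run of each check across one interval."""
    return [random.uniform(0.0, interval) for _ in range(num_checks)]

# 100 checks with a 10 s interval: first runs land anywhere in [0, 10) s
delays = initial_delays(100, 10.0)
print(min(delays), max(delays))
```

After the first staggered run, each check keeps firing every interval seconds, so the offsets persist and the load stays evened out.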
Hi Anton,
I was wondering if it is possible to have all subdomains of a certain address resolve to the same set of IP addresses.
So not using nameservers like below:
zonenames:
  example.com:
    pool: www-example
    ttl: 1
    nameservers:
      - name: ns1.example.com
        ttl: 14400
      - name: ns2.example.com
        ttl: 14400
but more like :
globalnames:
  .example.com.:
    pool: example.com.pool
    ttl: 1
using :
pools:
  example.com.pool:
    monitor: tcp
    monitor_params:
      port: 443
      timeout: 5
      retries: 2
      interval: 10
    lb_method: wrr
    fallback: any
    members:
      - ip: 10.165.209.22
        name: lbapr1
        weight: 1
      - ip: 10.165.209.25
        name: lbapr2
        weight: 1
      - ip: 10.165.209.26
        name: lbapr3
        weight: 1
I want to have all subdomains like a.example.com, b.example.com, c.example.com resolve to my 3 IP addresses, on which I have HAProxy determining where my traffic should go.
If I load the config as described above, polaris-gslb starts without a problem, and I do see the global name and the pool in the get-ppdns-state table:
"example.com": {
"status": false,
"fallback": "any",
"lb_method": "wrr",
"max_addrs_returned": 1,
"dist_tables": {
"_default": {
"num_unique_addrs": 3,
"rotation": [
"10.165.209.22",
"10.165.209.25",
"10.165.209.26"
],
"index": 1
}
}
},
Although when I do a dig on one of my subdomains, it doesn't resolve:
dig john.example.com
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.47.rc1.el6_8.3 <<>> john.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 49183
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;john.example.com. IN A
;; Query time: 20 msec
;; SERVER: 10.10.155.92#53(10.10.155.92)
;; WHEN: Wed Sep 27 11:09:34 2017
;; MSG SIZE rcvd: 50
I would really appreciate your help
A potential change in the pdns JSON API reported in #20 is causing a trailing dot to appear in the JSON API qname; we might be better off stripping it before the qname gets processed further.
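Stripping it could be as simple as normalizing the qname once where the JSON API request is parsed (a sketch; normalize_qname is an illustrative name, not the Polaris function):

```python
def normalize_qname(qname):
    """Drop the trailing root dot and lowercase the name for lookup."""
    return qname.rstrip('.').lower()

print(normalize_qname('www.Example.com.'))  # www.example.com
```

This way both the old dotless form and the new dotted form map to the same lookup key.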
This seems like an awesome project, but I don't see any activity. I would like to use it but am uncertain how to proceed. Is there anyone at the helm?
I see directions to install on site and on loadbalancer.com. I have done it both ways and both seem to work, though I haven't fully tested transport. I then thought: this DNS is facing the Interwebz, and I see pDNS has lots of security patches. So, does this support 4.1.4?
Hello,
I have tried polaris-gslb with pdns 3.4.9 and Python 3.4.5. The installation is fine, but DNS doesn't work.
I expect "status": true and don't know why it's false:
/opt/polaris/bin/polaris-memcache-control get-generic-state
{
  "timestamp": 1478168302.3423028,
  "pools": {
    "www-example2": {
      "name": "www-example2",
      "fallback": "any",
      "lb_method": "fogroup",
      "members": [
        {
          "last_probe_issued_time": 1478168301.3214195,
          "ip": "114.215.87.85",
          "region": "None",
          "name": "www2-dc1",
          "retries_left": 0,
          "status_reason": "404 Not Found",
          "status": false,
          "weight": 1
        }
      ],
      "max_addrs_returned": 1,
      "last_status": false,
      "monitor": {
        "port": 80,
        "interval": 10,
        "hostname": "www.appextest.com",
        "use_ssl": false,
        "name": "http",
        "url_path": "/health2.html",
        "retries": 2,
        "timeout": 5
      }
    },
    "www-example": {
      "name": "www-example",
      "fallback": "any",
      "lb_method": "fogroup",
      "members": [
        {
          "last_probe_issued_time": 1478168301.321435,
          "ip": "114.215.87.85",
          "region": "None",
          "name": "www1-dc1",
          "retries_left": 0,
          "status_reason": "ConnectionRefusedError [Errno 111] Connection refused during socket.connect()",
          "status": false,
          "weight": 1
        }
      ],
      "max_addrs_returned": 1,
      "last_status": false,
      "monitor": {
        "port": 443,
        "interval": 10,
        "hostname": "www.example1211.com",
        "use_ssl": true,
        "name": "http",
        "url_path": "/health.html",
        "retries": 2,
        "timeout": 5
      }
    }
  },
  "globalnames": {
    "www.appextest.com": {
      "ttl": 1,
      "name": "www.appextest.com",
      "pool_name": "www-example2"
    },
    "www.example1211.com": {
      "ttl": 1,
      "name": "www.example1211.com",
      "pool_name": "www-example"
    }
  }
}
I also wonder whether a master/slave configuration is supported.
Not yet?
Hi Aaron,
I have one query related to polaris-health.
When it is running, it updates the generic-state of the servers, and based on this output I pick the active server and send requests to it.
But when I stop polaris-health, I still get the generic-state output, and requests still go to the "active" server from it, though it is quite possible that the server is no longer active, since polaris-health is stopped and not monitoring the servers.
Please let me know if this is a bug or expected behavior.
Regards,
Nitin Agarwal
The installation document doesn't say to edit the startup scripts for pdns, or to make a startup script for Polaris. There is an older blog post that does. Can you reconcile them? Should we combine the two resources? I am not able to get pdns working and am not sure what the issue is.
Hello,
I have tried polaris-gslb v0.4 with the newest (git) pdns. With a config almost identical to the example, I see these in the debug output:
Jun 08 09:30:32 [remotebackend]: Polaris Remote Backend initialized request: {"method": "initialize", "parameters": {"command": "/opt/polaris/bin/polaris-pdns", "timeout": "2000"}} result: True pid: 29508 time taken: 0.000029
Jun 08 09:30:32 [remotebackend]: no globalname found for qname "www.example.com." request: {"method": "lookup", "parameters": {"local": "0.0.0.0", "qname": "www.example.com.", "qtype": "SOA", "real-remote": "192.168.51.202/32", "remote": "192.168.51.202", "zone-id": -1}} result: False pid: 29508 time taken: 0.001619
Jun 08 09:30:32 [remotebackend]: no globalname found for qname "example.com." request: {"method": "lookup", "parameters": {"local": "0.0.0.0", "qname": "example.com.", "qtype": "SOA", "real-remote": "192.168.51.202/32", "remote": "192.168.51.202", "zone-id": -1}} result: False pid: 29508 time taken: 0.000044
Jun 08 09:30:32 [remotebackend]: no globalname found for qname "com." request: {"method": "lookup", "parameters": {"local": "0.0.0.0", "qname": "com.", "qtype": "SOA", "real-remote": "192.168.51.202/32", "remote": "192.168.51.202", "zone-id": -1}} result: False pid: 29508 time taken: 0.000035
Jun 08 09:30:32 [remotebackend]: no globalname found for qname "." request: {"method": "lookup", "parameters": {"local": "0.0.0.0", "qname": ".", "qtype": "SOA", "real-remote": "192.168.51.202/32", "remote": "192.168.51.202", "zone-id": -1}} result: False pid: 29508 time taken: 0.000034
My polaris-lb.yaml looks like:
pools:
  www:
    monitor: http
    monitor_params:
      use_ssl: false
      hostname: www.example.com
      url_path: /info.php
    lb_method: fogroup
    fallback: any
    members:
      - ip: 192.168.51.202
        name: debian-test
        weight: 1
      - ip: 192.168.99.101
        name: debian84
        weight: 1

globalnames:
  www.example.com:
    pool: www
    ttl: 1
This is debian jessie with some testing repo for new python:
python3
Python 3.5.1+ (default, May 9 2016, 11:00:17)
[GCC 5.3.1 20160429] on linux
I have 2 questions. Is it possible to set or modify the SOA record and NS records?
I usually delegate a subdomain to my GSLB infrastructure, for example l.example.com, so I would like to set the SOA record for those domains. I can see that there is a SOA record for the full entry from the config, and it is:
polaris.example.com. hostmaster.polaris.example.com. 1 600 86400 1 3600
And it would also be nice to configure the NS entries for the GSLB domains. Is this planned?
Thank you
Marco
For example, timeout during sock.connect() vs sock.recv()
Hello, this is not really an issue but more of a question. What is the max number of pools / globalnames entries that can be handled by the LB configuration file at /opt/polaris/etc/polaris-lb.yaml? Could there be a limit to the size of the file or any other issue you can think of, such as speed of the health check, if we use this for 3,000-5,000 hostnames?
I want to add a number of domains and subdomains in polaris-lb.yaml
Please let me know.