nixy - nginx auto configuration and service discovery for Mesos/Marathon
License: MIT License
The subdomain label needs to be checked for characters that are not valid in DNS.
Someone adding a "/" by mistake can potentially break the generated config.
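Such a check could be as simple as matching each label against the DNS hostname rules (RFC 1123). A minimal sketch in Go; the function name and scope are assumptions, not nixy's actual code:

```go
package main

import (
	"fmt"
	"regexp"
)

// dnsLabel matches a valid DNS label: letters, digits and hyphens,
// not starting or ending with a hyphen, at most 63 characters.
var dnsLabel = regexp.MustCompile(`^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$`)

// validSubdomain reports whether s is safe to use as a subdomain label.
func validSubdomain(s string) bool {
	return dnsLabel.MatchString(s)
}

func main() {
	fmt.Println(validSubdomain("my-app")) // true
	fmt.Println(validSubdomain("my/app")) // false
}
```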
When nixy is configured with invalid credentials it doesn't issue error messages as it should; instead it operates happily with an empty set of apps/tasks.
Nifty little tool guys!
I am trying to use service ports to define where to listen for each app.
The nginx.tmpl example that nixy ships with shows:
listen 7000;
I am trying to get it to go like:
{{ $task := index $app.Tasks 0 }}
{{ $servicePort := index $task.ServicePorts 0 }}
listen {{ $servicePort }};
But nothing is being generated in nginx.conf, so I'm wondering whether ServicePorts are being picked up correctly from Marathon.
I currently run Marathon + marathon-lb, which is serving this service port, so I know it is configured right on Marathon.
Any pointers would be great!
Hey,
I like your software very much. I was just curious why the name field in the port definition was left out; I could use it for a use case. Is that intended? If not, I would be willing to add a PR for it.
With best regards
Hi,
We have noticed that in some instances we can end up with stale config being served by Nginx.
On digging further, we see that the upstream block in the candidate nginx.conf that is written out is missing a host and just reads:
server :
This seems to happen when the .Host variable in the .Tasks struct is empty.
Is it possible to add some simple filtering to nixy to ensure that tasks are only considered valid if they have both .Host and .Ports non-empty?
We tried to suppress them in templating, but that can lead to an empty upstream block, which is also invalid; we can't find a clean way to do it in templating.
Thoughts?
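On the nixy side, the requested filtering could look roughly like this hedged sketch (the Task field names follow the template variables mentioned above; everything else is an assumption, not nixy's real code):

```go
package main

import "fmt"

// Task mirrors the two fields the template uses; nixy's real struct
// has more fields.
type Task struct {
	Host  string
	Ports []int
}

// filterTasks keeps only tasks that have both a host and at least one
// port, so lines like "server :;" never reach the nginx config.
func filterTasks(tasks []Task) []Task {
	valid := make([]Task, 0, len(tasks))
	for _, t := range tasks {
		if t.Host != "" && len(t.Ports) > 0 {
			valid = append(valid, t)
		}
	}
	return valid
}

func main() {
	tasks := []Task{
		{Host: "10.0.0.1", Ports: []int{31001}},
		{Host: "", Ports: []int{31002}}, // would render a host-less server line
		{Host: "10.0.0.2", Ports: nil},  // would render a port-less server line
	}
	fmt.Println(len(filterTasks(tasks))) // 1
}
```

Note this only moves the problem if every task of an app is invalid: the template would still emit an empty upstream block, so the app itself would also need to be skipped in that case.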
Hi,
I have nixy 0.12.1 + nginx on separate server and Mesos/Marathon cluster (3 masters + 3 slaves) on Digital Ocean in the same data center.
Nixy runs on Ubuntu 16.04.3 with 1 min load average <= 0.3 and total CPU load <= 25% at any time.
Monitoring is done with Monit.
There are two quite strange issues. Your help or advice on these would be very much appreciated.
-- Logs begin at Tue 2017-09-05 09:24:47 CEST, end at Tue 2017-09-05 10:57:36 CEST. --
Sep 05 09:26:22 mnixy sh[1547]: time="2017-09-05T09:26:22+02:00" level=error msg="endpoint is down" endpoint="http://master-2:8080" error="Get http://master-2:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 09:27:28 mnixy sh[1547]: time="2017-09-05T09:27:28+02:00" level=error msg="endpoint is down" endpoint="http://master-2:8080" error="Get http://master-2:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 09:28:00 mnixy sh[1547]: time="2017-09-05T09:28:00+02:00" level=error msg="endpoint is down" endpoint="http://master-1:8080" error="Get http://master-1:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 09:31:42 mnixy sh[1547]: time="2017-09-05T09:31:42+02:00" level=error msg="endpoint is down" endpoint="http://master-1:8080" error="Get http://master-1:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 09:42:28 mnixy sh[1547]: time="2017-09-05T09:42:28+02:00" level=error msg="endpoint is down" endpoint="http://master-3:8080" error="Get http://master-3:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 09:43:11 mnixy sh[1547]: time="2017-09-05T09:43:10+02:00" level=error msg="endpoint is down" endpoint="http://master-3:8080" error="Get http://master-3:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:05:38 mnixy sh[1547]: time="2017-09-05T10:05:38+02:00" level=error msg="endpoint is down" endpoint="http://master-1:8080" error="Get http://master-1:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:09:38 mnixy sh[1547]: time="2017-09-05T10:09:38+02:00" level=error msg="endpoint is down" endpoint="http://master-1:8080" error="Get http://master-1:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:12:34 mnixy sh[1547]: time="2017-09-05T10:12:34+02:00" level=error msg="endpoint is down" endpoint="http://master-3:8080" error="Get http://master-3:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:23:28 mnixy sh[1547]: time="2017-09-05T10:23:28+02:00" level=error msg="endpoint is down" endpoint="http://master-2:8080" error="Get http://master-2:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:24:44 mnixy sh[1547]: time="2017-09-05T10:24:44+02:00" level=error msg="endpoint is down" endpoint="http://master-2:8080" error="Get http://master-2:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:25:04 mnixy sh[1547]: time="2017-09-05T10:25:04+02:00" level=error msg="endpoint is down" endpoint="http://master-1:8080" error="Get http://master-1:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:28:30 mnixy sh[1547]: time="2017-09-05T10:28:30+02:00" level=error msg="endpoint is down" endpoint="http://master-1:8080" error="Get http://master-1:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:31:16 mnixy sh[1547]: time="2017-09-05T10:31:16+02:00" level=error msg="endpoint is down" endpoint="http://master-1:8080" error="Get http://master-1:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:33:33 mnixy sh[1547]: time="2017-09-05T10:33:32+02:00" level=error msg="endpoint is down" endpoint="http://master-3:8080" error="Get http://master-3:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:45:32 mnixy sh[1547]: time="2017-09-05T10:45:32+02:00" level=error msg="endpoint is down" endpoint="http://master-1:8080" error="Get http://master-1:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:45:53 mnixy sh[1547]: time="2017-09-05T10:45:53+02:00" level=info msg="marathon reload triggered" client="127.0.0.1:55782"
Sep 05 10:45:53 mnixy sh[1547]: time="2017-09-05T10:45:53+02:00" level=info msg="no config changes"
Sep 05 10:47:35 mnixy sh[1547]: time="2017-09-05T10:47:35+02:00" level=error msg="endpoint is down" endpoint="http://master-3:8080" error="Get http://master-3:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:53:02 mnixy sh[1547]: time="2017-09-05T10:53:02+02:00" level=error msg="endpoint is down" endpoint="http://master-1:8080" error="Get http://master-1:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:53:10 mnixy sh[1547]: time="2017-09-05T10:53:10+02:00" level=error msg="endpoint is down" endpoint="http://master-1:8080" error="Get http://master-1:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 05 10:56:56 mnixy sh[1547]: time="2017-09-05T10:56:56+02:00" level=error msg="endpoint is down" endpoint="http://master-1:8080" error="Get http://master-1:8080/ping: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
However, the health check is just fine and no nodes have been marked as unhealthy.
Looking at host metrics, everything looks good: no CPU spikes and no network timeouts/errors.
Monit polls /api/v1/health once a minute and gets this once in a while (2-3 times per hour). The Monit check is:

check host nixy with address localhost
    if failed
        port 6600
        protocol HTTP
        request "/v1/health"
        with timeout 3 seconds
    then alert
nixy.toml (comments stripped):
# Nixy listening port
port = "6600"
xproxy = ""
marathon = ["http://master-1:8080", "http://master-2:8080", "http://master-3:8080"]
user = ""
pass = ""
realm = ""
nginx_config = "/etc/nginx/nginx.conf"
nginx_template = "/opt/nixy/nginx.tmpl"
nginx_cmd = "/opt/nixy/generate_index_html.sh"
nginx_ignore_check = false
[statsd]
addr = "localhost:8125"
namespace = "nixy.mesos-test"
sample_rate = 100
/opt/nixy/generate_index_html.sh (wrapper to generate list of running services and reload nginx)
#!/usr/bin/env bash
#
# Wrapper script for nixy to pass all command line parameters to nginx (if any)
# and update the static microservices list page on successful nginx reload
# Keep environment clean
LC_ALL=C
# HTML index page path
readonly HTML="/opt/nixy/html/index.html"
# Fail on any non-zero status code
set -e
# Run nginx with all command line parameters passed (if any)
if [[ $# -gt 0 ]]; then
nginx "$@"
fi
# Get website list from nginx as array, excluding 'localhost', 'list' and 'logentries'
SITE_LIST=($(nginx -T 2> /dev/null |grep -E -o "server_name\ (.*);$" |grep -vE "\ \_|\ localhost|\ list\.|\ logentries" |awk '{ print $2 }' |tr -d ';'))
# Overwrite index page with HTML header
cat > $HTML <<- EOM
<html>
<head>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta/css/bootstrap.min.css" integrity="sha384-/Y6pD6FV/Vv2HJnA6t+vslU6fwYXjCFtcEpHbNJ0lyAFsXTsjBbfaDjzALeQsN6M" crossorigin="anonymous">
<script type="text/javascript" src="http://livejs.com/live.js"></script>
<title>Available services</title>
</head>
<body>
<table class="table">
<ul>
EOM
# Append services URL
for elem in "${SITE_LIST[@]}"; do
cat >> $HTML <<- EOM
<tr>
<td><a href='http://$elem' target='_blank'>$elem</a></td>
</tr>
EOM
done
# Append HTML index page with footer
cat >> $HTML <<- EOM
</ul>
</table>
<p class="small">
Last updated: $(date)
</p>
</body>
</html>
EOM
# EOF
Nginx template is pretty much standard, so I won't list it here.
Thanks in advance for your help!
Currently, ports defined under ipAddress/discovery/ports are not supported:
{
  "id": "nginx-mesos",
  "container": {
    "type": "MESOS",
    "docker": {
      "image": "nginx:1.10.2-alpine"
    }
  },
  "cpus": 0.5,
  "mem": 64.0,
  "ipAddress": {
    "networkName": "calico-network",
    "labels": {
      "app": "nginx",
      "group": "production"
    },
    "discovery": {
      "ports": [
        { "number": 80, "name": "http", "protocol": "tcp" }
      ]
    }
  }
}
And portDefinitions is not available in ip-per-container mode.
Hello,
Is there any way to get the application's health check path (as deployed in Marathon) in nixy?
I changed the nixy code to fetch it and am able to get it successfully, but is there any other way?
Thanks
Currently the code looks like:
for _, app := range jsonapps.Apps {
OUTER:
	for _, task := range jsontasks.Tasks {
		if task.AppId != app.Id {
			continue
		}
So the list of tasks is iterated many times (once per app). The idea is to first iterate over the list of tasks once and build a map appIdToTasks, then iterate over the list of apps and look up the appropriate value in appIdToTasks. This improves the complexity from O(len(apps) * len(tasks)) to O(len(apps)) + O(len(tasks)).
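A hedged sketch of that map-based approach (the struct types here are minimal stand-ins with assumed field names, not nixy's real types):

```go
package main

import "fmt"

// Minimal stand-ins for nixy's structs (field names follow the snippet above).
type Task struct {
	AppId string
	Host  string
}
type App struct {
	Id string
}

// tasksByApp builds the proposed appIdToTasks index in one pass over tasks.
func tasksByApp(tasks []Task) map[string][]Task {
	appIdToTasks := make(map[string][]Task, len(tasks))
	for _, t := range tasks {
		appIdToTasks[t.AppId] = append(appIdToTasks[t.AppId], t)
	}
	return appIdToTasks
}

func main() {
	apps := []App{{Id: "/web"}, {Id: "/api"}}
	tasks := []Task{
		{AppId: "/web", Host: "10.0.0.1"},
		{AppId: "/api", Host: "10.0.0.2"},
		{AppId: "/web", Host: "10.0.0.3"},
	}
	idx := tasksByApp(tasks) // one pass over tasks
	for _, app := range apps {
		// O(1) lookup per app instead of a scan over all tasks
		fmt.Println(app.Id, len(idx[app.Id]))
	}
}
```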
Hi, I just upgraded my nixy version to 0.7.0 and I'm getting the following error when nixy is trying to update the config:
time="2016-08-12T17:35:49-03:00" level=error msg="unable to generate nginx config" error="rename /tmp/nixy020179483 /etc/nginx/nginx.conf: invalid cross-device link"
Do you guys have any idea why I'm getting this error? It seems like this "/tmp/nixy020179483" file does not exist.
ps: I'm using the old template
ps2: for now I'm using the 0.6.0, as before.
Hi,
I have another pretty weird issue with Nixy which I'm having difficulty troubleshooting. I'd like to ask whether you've experienced something similar, or maybe for help with a way forward to troubleshoot.
% nixy -v
version: 0.12.1
commit: bbe2b532e80088c212d16b7cca626549a60c1e70
date: 2017-07-28T09:14:38Z
% lsb_release -d
Description: Ubuntu 16.04.3 LTS
nixy.toml (comments stripped):
# Nixy listening port
port = "6600"
xproxy = ""
marathon = ["http://master-1:8080", "http://master-2:8080", "http://master-3:8080"]
user = ""
pass = ""
realm = ""
nginx_config = "/etc/nginx/nginx.conf"
nginx_template = "/opt/nixy/nginx.tmpl"
nginx_cmd = "/opt/nixy/generate_index_html.sh"
nginx_ignore_check = false
[statsd]
addr = "localhost:8125"
namespace = "nixy.mesos-test"
sample_rate = 100
/opt/nixy/generate_index_html.sh
is the nginx wrapper script that builds a simple static web page with the list of running services (by parsing nginx.conf).
The issue is that starting at some moment (from 5 minutes to a few hours in), the web page is updated once a minute even though no events are coming from Marathon, and a nice cute bash zombie process hangs around. The nginx configuration is generated just fine and Nixy keeps running.
% ps axuww |grep -E "[Nn]ixy|[Bb]ash.*\<d"
root 1560 0.0 0.0 4508 716 ? Ss 11:40 0:00 /bin/sh -c /opt/nixy/nixy -f /opt/nixy/nixy.toml
root 1566 0.2 1.2 126384 12880 ? Sl 11:40 0:30 /opt/nixy/nixy -f /opt/nixy/nixy.toml
root 28284 0.0 0.0 0 0 ? Zs 14:43 0:00 [bash] <defunct>
There's always exactly one zombie bash (so the number is not growing), and its PID changes once in a while (hard to catch the exact timing). It already changed while I was writing this:
% ps axuww |grep -E "[Bb]ash.*\<d"
root 29696 0.0 0.0 0 0 ? Zs 14:53 0:00 [bash] <defunct>
There's no other bash script that could be running there. I'm also writing a log from the script to understand when it gets triggered, and it doesn't really match the Nixy logs. This is the last Nixy entry for today:
sh[1560]: time="2017-09-19T12:07:05+02:00" level=info msg="marathon event received" endpoint="http://master-1:8080" event="remove_health_check_event"
sh[1560]: time="2017-09-19T12:07:05+02:00" level=info msg="no config changes"
And this is script log
% tail -f /tmp/gen.log
Tue Sep 19 15:04:56 CEST 2017 Start running: '/opt/nixy/generate_index_html.sh', PID 31282
nginx 1868 0.0 0.3 37628 3088 ? S 11:40 0:00 \_ nginx: worker process
root 1560 0.0 0.0 4508 716 ? Ss 11:40 0:00 /bin/sh -c /opt/nixy/nixy -f /opt/nixy/nixy.toml
root 1566 0.2 1.2 126384 12888 ? Sl 11:40 0:34 \_ /opt/nixy/nixy -f /opt/nixy/nixy.toml
root 31282 0.0 0.3 19704 3164 ? S 15:04 0:00 \_ /bin/bash /opt/nixy/generate_index_html.sh -c /etc/nginx/nginx.conf -t
root 31284 0.0 0.3 36228 3460 ? R 15:04 0:00 \_ ps axuwwf
root 31285 0.0 0.0 12944 944 ? S 15:04 0:00 \_ grep -A 3 -B 3 31282
root 1813 0.1 1.8 438664 18732 ? Sl 11:40 0:14 /usr/bin/python3 /usr/bin/fail2ban-server -s /var/run/fail2ban/fail2ban.sock -p /var/run/fail2ban/fail2ban.pid -x -b
root 1886 0.0 0.4 65408 4640 ? Ss 11:40 0:00 /usr/lib/postfix/sbin/master
postfix 1898 0.0 0.4 67524 4432 ? S 11:40 0:00 \_ qmgr -l -t unix -u
Tue Sep 19 15:04:56 CEST 2017 Stop running: '/opt/nixy/generate_index_html.sh', PID 31282
Tue Sep 19 15:04:56 CEST 2017 Start running: '/opt/nixy/generate_index_html.sh', PID 31345
nginx 1868 0.0 0.3 37628 3088 ? S 11:40 0:00 \_ nginx: worker process
root 1560 0.0 0.0 4508 716 ? Ss 11:40 0:00 /bin/sh -c /opt/nixy/nixy -f /opt/nixy/nixy.toml
root 1566 0.2 1.2 126384 12888 ? Sl 11:40 0:34 \_ /opt/nixy/nixy -f /opt/nixy/nixy.toml
root 31345 0.0 0.3 19704 3084 ? S 15:04 0:00 \_ /bin/bash /opt/nixy/generate_index_html.sh -c /etc/nginx/nginx.conf -t
root 31347 0.0 0.3 36228 3524 ? R 15:04 0:00 \_ ps axuwwf
root 31348 0.0 0.0 12944 932 ? S 15:04 0:00 \_ grep -A 3 -B 3 31345
root 1813 0.1 1.8 438664 18732 ? Sl 11:40 0:14 /usr/bin/python3 /usr/bin/fail2ban-server -s /var/run/fail2ban/fail2ban.sock -p /var/run/fail2ban/fail2ban.pid -x -b
root 1886 0.0 0.4 65408 4640 ? Ss 11:40 0:00 /usr/lib/postfix/sbin/master
postfix 1898 0.0 0.4 67524 4432 ? S 11:40 0:00 \_ qmgr -l -t unix -u
Tue Sep 19 15:04:56 CEST 2017 Stop running: '/opt/nixy/generate_index_html.sh', PID 31345
Tue Sep 19 15:05:56 CEST 2017 Start running: '/opt/nixy/generate_index_html.sh', PID 31435
nginx 1868 0.0 0.3 37628 3088 ? S 11:40 0:00 \_ nginx: worker process
root 1560 0.0 0.0 4508 716 ? Ss 11:40 0:00 /bin/sh -c /opt/nixy/nixy -f /opt/nixy/nixy.toml
root 1566 0.2 1.2 126384 12888 ? Sl 11:40 0:34 \_ /opt/nixy/nixy -f /opt/nixy/nixy.toml
root 31435 0.0 0.3 19704 3224 ? S 15:05 0:00 \_ /bin/bash /opt/nixy/generate_index_html.sh -c /etc/nginx/nginx.conf -t
root 31437 0.0 0.3 36228 3436 ? R 15:05 0:00 \_ ps axuwwf
root 31438 0.0 0.0 12944 936 ? S 15:05 0:00 \_ grep -A 3 -B 3 31435
root 1813 0.1 1.8 438664 18732 ? Sl 11:40 0:14 /usr/bin/python3 /usr/bin/fail2ban-server -s /var/run/fail2ban/fail2ban.sock -p /var/run/fail2ban/fail2ban.pid -x -b
root 1886 0.0 0.4 65408 4640 ? Ss 11:40 0:00 /usr/lib/postfix/sbin/master
postfix 1898 0.0 0.4 67524 4432 ? S 11:40 0:00 \_ qmgr -l -t unix -u
Tue Sep 19 15:05:56 CEST 2017 Stop running: '/opt/nixy/generate_index_html.sh', PID 31435
Tue Sep 19 15:05:56 CEST 2017 Start running: '/opt/nixy/generate_index_html.sh', PID 31498
nginx 1868 0.0 0.3 37628 3088 ? S 11:40 0:00 \_ nginx: worker process
root 1560 0.0 0.0 4508 716 ? Ss 11:40 0:00 /bin/sh -c /opt/nixy/nixy -f /opt/nixy/nixy.toml
root 1566 0.2 1.2 126384 12888 ? Sl 11:40 0:34 \_ /opt/nixy/nixy -f /opt/nixy/nixy.toml
root 31498 0.0 0.3 19704 3088 ? S 15:05 0:00 \_ /bin/bash /opt/nixy/generate_index_html.sh -c /etc/nginx/nginx.conf -t
root 31500 0.0 0.3 36228 3424 ? R 15:05 0:00 \_ ps axuwwf
root 31501 0.0 0.1 12944 1088 ? S 15:05 0:00 \_ grep -A 3 -B 3 31498
root 1813 0.1 1.8 438664 18732 ? Sl 11:40 0:14 /usr/bin/python3 /usr/bin/fail2ban-server -s /var/run/fail2ban/fail2ban.sock -p /var/run/fail2ban/fail2ban.pid -x -b
root 1886 0.0 0.4 65408 4640 ? Ss 11:40 0:00 /usr/lib/postfix/sbin/master
postfix 1898 0.0 0.4 67524 4432 ? S 11:40 0:00 \_ qmgr -l -t unix -u
Tue Sep 19 15:05:56 CEST 2017 Stop running: '/opt/nixy/generate_index_html.sh', PID 31498
% date
Tue Sep 19 15:06:04 CEST 2017
So, it seems that Nixy is running two instances of the script once a minute, and I don't understand why. I'm probably doing something wrong, but I can't see what exactly.
Here's the script for your convenience
#!/bin/bash
#
# Wrapper script for nixy to pass all command line parameters to nginx (if any)
# and update the static microservices list page on successful nginx reload
echo "$(date) Start running: '$0', PID $$" >> /tmp/gen.log
ps axuwwf |grep -A 3 -B 3 "$$" >> /tmp/gen.log
# Write safe shell scripts
set -euf
# HTML index page path
readonly HTML="/opt/nixy/html/index.html"
# Keep environment clean
LC_ALL=C
readonly BASENAME_ZERO="$(basename "$0")"
readonly TMP_DIR="/tmp"
readonly TMP_HTML="${TMP_DIR}/${BASENAME_ZERO}.$$"
trap 'rm -f ${TMP_HTML}' EXIT 1 2 3 13 15
# Run nginx with all command line parameters passed (if any)
if [[ $# -gt 0 ]]; then
nginx "$@"
fi
# Get website list from nginx as array, excluding 'localhost', 'list' and 'logentries'
declare -a SITE_LIST=()
SITE_LIST=($(nginx -T 2> /dev/null |grep -E -o "server_name\ (.*);$" \
| grep -vE "\ \_|\ localhost|\ list\.|\ logentries" \
| awk '{for (i=2; i<=NF; i++) print $i}' \
| tr -d ';'))
# Get services based on names
declare -A SERVICES=()
if [[ ${#SITE_LIST[@]} -gt 0 ]]; then
for i in "${SITE_LIST[@]}"; do
SVC=$(echo "${i}" | cut -d'.' -f2)
SERVICES[${SVC}]+="${i} "
done
fi
# Overwrite index page with HTML header
cat > "${TMP_HTML}" <<- EOM
<html>
<head>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta/css/bootstrap.min.css" integrity="sha384-/Y6pD6FV/Vv2HJnA6t+vslU6fwYXjCFtcEpHbNJ0lyAFsXTsjBbfaDjzALeQsN6M" crossorigin="anonymous">
<script type="text/javascript" src="http://livejs.com/live.js"></script>
<title>Available services</title>
</head>
<body>
<table class="table">
<ul>
EOM
# Append grouped services URL
if [[ ${#SERVICES[@]} -gt 0 ]]; then
for elem in "${!SERVICES[@]}"; do
cat >> "${TMP_HTML}" <<- EOM
<tr>
<th>${elem}:</th>
</tr>
EOM
read -r -a SLIST <<< "${SERVICES[$elem]}"
for i in "${SLIST[@]}"; do
cat >> "${TMP_HTML}" <<- EOM
<tr>
<td><a href='http://$i' target='_blank'>$i</a></td>
</tr>
EOM
done
done
fi
# Append HTML index page with footer
cat >> "${TMP_HTML}" <<- EOM
</ul>
</table>
<p class="small">
Last updated: $(date)
</p>
</body>
</html>
EOM
# Move resulting file to index page
mv "${TMP_HTML}" "${HTML}"
echo "$(date) Stop running: '$0', PID $$" >> /tmp/gen.log
# EOF
And here's nginx.tmpl
# Generated by nixy {{ datetime }}
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
add_header X-Proxy {{ .Xproxy }} always;
server_names_hash_bucket_size 256;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
server_tokens off;
#access_log /var/log/nginx/access.log main;
access_log off;
error_log /var/log/nginx/error.log warn;
sendfile on;
#tcp_nopush on;
client_max_body_size 128m;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
proxy_redirect off;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
# time out settings
proxy_send_timeout 120;
proxy_read_timeout 120;
send_timeout 120;
keepalive_timeout 10;
#gzip on;
server {
listen 80 default_server;
server_name _;
# Everything is a 503
location / {
return 503;
}
}
server {
listen 127.0.0.1:80;
listen [::1]:80;
server_name localhost;
location / {
return 204;
}
location /health {
access_log off;
return 200 'OK. I am healthy.';
add_header Content-Type text/plain;
}
# Enable nginx status page
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
}
server {
listen 80;
server_name list.test;
root /opt/nixy/html;
location / {
try_files $uri /index.html;
}
}
{{- range $id, $app := .Apps }}
{{- $appName := index $app.Hosts 0 }}
### Configuration for {{ $appName }}
upstream {{ $appName }} {
{{- range $app.Tasks }}
server {{ .Host }}:{{ index .Ports 0 }};
{{- end }}
}
server {
listen 80;
{{- range $app.Hosts }}
{{- $svc := split $appName "." }}
server_name {{ . }}{{ if $app.Labels.ms_group }}{{ range split $app.Labels.ms_group " " }} {{ . }}.{{ index $svc 1 }}.{{ index $svc 2 }}{{ end }}{{ end }};
{{- end }}
location / {
proxy_set_header HOST $host;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_connect_timeout 30;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_pass http://{{ index $app.Hosts 0 }};
}
}
{{- end }}
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
I have an nginx server with the nginx statsd module built in. It would be very convenient and harmonious if the same statsd addr I pass to nixy could make its way into the nginx configuration.
For example, in an nginx.tmpl file:
http {
add_header X-Proxy {{ .Xproxy }} always;
statsd_server {{ .Statsd }};
server { ... }
}
I can see changing json:"-" to json:",omitempty", or removing the constraint altogether. What do you think?
We're running Marathon in HA mode, meaning we have 3 Marathon instances which are "clustered" via ZooKeeper.
So, for us it wouldn't necessarily make sense to use a static Marathon URL as seems to be implemented now. If that exact instance went down (for whatever reason), we could not make use of Marathon's HA feature IMHO, and nixy would be in an inoperable state (if I understand correctly).
Possible solutions for (additionally) being able to use HA:
- Discover the webui_url of the Marathon framework via the Mesos Master's /state endpoint (I guess you'd first have to get the actual Master address via the ZK connection string of Mesos), or
References:
Hi, Benjamin :)
I'm seeing a lot of 503 responses right after deploying an application with Marathon. I think it happens because, after the deployment has successfully added the new app instance, both the old and the new instances are present in the upstream, but Marathon shortly thereafter kills the old one. My current theory is that the new instance, although present in the upstream, is marked as down by nginx while the old instance is still marked as up. Traffic therefore keeps being passed to the old instance, and when Marathon kills it there are no healthy servers left in the upstream. That means it takes 10 seconds (the default fail_timeout) before the new instance is tried again.
It would be interesting to see if adding fail_timeout=0 at https://github.com/martensson/nixy/blob/master/nginx.tmpl#L32 could resolve this.
What do you think?
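With that change, the rendered upstream block might look roughly like this (a sketch with made-up addresses, not output from the actual template):

```nginx
upstream example-app {
    # fail_timeout=0: a server that failed becomes eligible again
    # immediately instead of being sidelined for the default 10s.
    server 10.0.0.1:31001 fail_timeout=0;
    server 10.0.0.2:31002 fail_timeout=0;
}
```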
@martensson I see that you strictly validate the nginx config here. The problem is that with this approach it is impossible to use nixy for virtual host generation: validation fails with upstream configuration coming from /etc/nginx/sites-enabled/... Can you make validation optional for this case?
thanks
Just wondering if you might have any template examples that support URL/path-based routing for different apps sharing the same DNS hostname? With my current template, Nginx refuses to start because I end up with many server blocks that have the same hostname.
I'm currently running Marathon-lb but am investigating using Nixy/Nginx. We route many microservices for the same Hostname to different paths (/common/ms1, /common/ms2, etc) which is achievable with MLB by assigning apps the same VHOST label with different PATH labels for the path routing.
That would require generating a single server block shared by many apps with multiple location blocks routing to each app in marathon.
Any suggestions/pointers would be greatly appreciated, thanks!
Cool app by the way, thanks. Makes life really easy for us.
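Not from nixy's shipped examples, but one possible shape for such a template: one upstream per app, then a single shared server block that routes by a hypothetical "path" Marathon label (an untested sketch; the label name, hostname, and upstream naming are all assumptions):

```nginx
# One upstream per app, keyed by its id (may need sanitizing for nginx).
{{- range $id, $app := .Apps }}
upstream app_{{ $id }} {
    {{- range $app.Tasks }}
    server {{ .Host }}:{{ index .Ports 0 }};
    {{- end }}
}
{{- end }}

# A single server block for the shared hostname; each app with a
# "path" label gets its own location.
server {
    listen 80;
    server_name common.example.com;
    {{- range $id, $app := .Apps }}
    {{- if $app.Labels.path }}
    location {{ $app.Labels.path }} {
        proxy_pass http://app_{{ $id }};
    }
    {{- end }}
    {{- end }}
}
```

Apps without the label simply get no location here, so they could still be handled by the regular per-host blocks elsewhere in the template.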
I am testing a blue-green/AB deployment where two (distinct) apps have the same NIXY_REALM and subdomain tags, but nixy only lists tasks running for just one app. Is this the normal behaviour, or am I missing something?
nixy version:
version: 0.13.0
commit: 776381f991adb5fb8f91cdde0f170ba8704121fa
date: 2017-11-13T12:56:12Z
Thanks
In order to have crash reporting capabilities, Nixy could have optional integration with e.g. Sentry (https://docs.getsentry.com/hosted/clients/go/).
Hi, I've just tested nixy; it works great, thank you.
Do you plan to implement a filtering possibility for the apps? We have some apps which we do not want to proxy to the outside world. We also have some production web containers and some internal ones, and would like to be able to use Marathon labels or other environment variables to distinguish between them (and run two different instances of nixy, of course).
Thanks
Krassi