geerlingguy / raspberry-pi-dramble
DEPRECATED - Raspberry Pi Kubernetes cluster that runs HA/HP Drupal 8
Home Page: http://www.pidramble.com/
License: MIT License
See: drush-ops/drush#1409
Originally discovered in #47
I have a 64GB USB 3.0 SSD I pulled from an old MacBook Air, and it gets 150-300MB/sec transfer with quite low latency. It would definitely saturate the USB 2.0 bus on the Pi, but it might still be a heck of a lot faster as the data store for MySQL than the normal mount on the cheap Kingston microSD card!
It's worth investigating at least, and maybe in the database playbook, the configuration could allow for internal vs. external drive configuration.
Check attached drives: $ sudo fdisk -l
Use fdisk to edit the disk's partition table: $ sudo fdisk /dev/sda (edit the whole disk device, not a single partition like /dev/sda1). Inside fdisk, enter d (delete the existing partition), then n, p, 1, <enter>, <enter> (create a new primary partition spanning the disk), then w (write the changes).
Format the new partition: $ sudo mkfs -t ext4 /dev/sda1
Create a mount point (e.g. /ssd) and mount the partition: $ sudo mount /dev/sda1 /ssd
To mount the partition at boot, add an entry for it: $ sudo nano /etc/fstab
See this post for more tips.
We're assuming the device is sda1; it could be something else, depending on your system's configuration/hardware.
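For reference, a sample /etc/fstab entry might look like the following (assuming the device is /dev/sda1 and the mount point is /ssd; adjust both for your hardware):

```
# /etc/fstab — mount the USB SSD at boot
# <device>   <mount point>  <type>  <options>          <dump> <pass>
/dev/sda1    /ssd           ext4    defaults,noatime   0      2
```

The noatime option cuts down on metadata writes, which is worth having on flash media.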
Benchmarking with hdparm and dd
Install hdparm first, with $ sudo apt-get install -y hdparm.
Read speed, SSD:
$ sudo hdparm -t /dev/sda1
Timing buffered disk reads: 72 MB in 3.05 seconds = 23.59 MB/sec
$ sudo hdparm -T /dev/sda1
Timing cached reads: 464 MB in 2.01 seconds = 231.31 MB/sec
Read speed, internal microSD (Kingston Class 10):
$ sudo hdparm -t /dev/mmcblk0
Timing buffered disk reads: 38 MB in 3.02 seconds = 12.57 MB/sec
$ sudo hdparm -T /dev/mmcblk0
Timing cached reads: 488 MB in 2.00 seconds = 244.02 MB/sec
Read speed, internal microSD (SanDisk Ultra Class 10):
$ sudo hdparm -t /dev/mmcblk0
Timing buffered disk reads: 54 MB in 3.07 seconds = 17.59 MB/sec
$ sudo hdparm -T /dev/mmcblk0
Timing cached reads: 460 MB in 2.00 seconds = 229.54 MB/sec
Read speed, internal microSD (Transcend Premium 300x 32GB):
$ sudo hdparm -t /dev/mmcblk0
Timing buffered disk reads: 54 MB in 3.11 seconds = 17.38 MB/sec
$ sudo hdparm -T /dev/mmcblk0
Timing cached reads: 498 MB in 2.00 seconds = 249.00 MB/sec
Read speed, internal microSD (Cheapo Class 4 4GB):
$ sudo hdparm -t /dev/mmcblk0
Timing buffered disk reads: 42 MB in 3.14 seconds = 13.37 MB/sec
$ sudo hdparm -T /dev/mmcblk0
Timing cached reads: 430 MB in 2.00 seconds = 214.55 MB/sec
Write speed, SSD:
$ sudo dd if=/dev/zero of=/ssd/output bs=8k count=10k; sudo rm -f /ssd/output
83886080 bytes (84 MB) copied, 2.31056 s, 36.3 MB/s
Write speed, internal microSD (Kingston Class 10):
$ sudo dd if=/dev/zero of=/tmp/output bs=8k count=10k; sudo rm -f /tmp/output
83886080 bytes (84 MB) copied, 2.02959 s, 41.3 MB/s
Write speed, internal microSD (SanDisk Ultra Class 10):
$ sudo dd if=/dev/zero of=/tmp/output bs=8k count=10k; sudo rm -f /tmp/output
83886080 bytes (84 MB) copied, 2.03785 s, 41.2 MB/s
Write speed, internal microSD (Transcend Premium 300x 32GB):
$ sudo dd if=/dev/zero of=/tmp/output bs=8k count=10k; sudo rm -f /tmp/output
83886080 bytes (84 MB) copied, 1.84696 s, 45.4 MB/s
Write speed, internal microSD (Cheap Class 4 4GB):
$ sudo dd if=/dev/zero of=/tmp/output bs=8k count=10k; sudo rm -f /tmp/output
83886080 bytes (84 MB) copied, 0.492591 s, 170 MB/s
...looks like I need to do some more benchmarking and see whether the internal microSD card can hold its own against a fast SSD over USB 2.0! Maybe the Pi 2 is even better than the Pi in terms of latency/sustained speed for the directly-connected microSD slot?
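One caveat worth noting here (my addition, not a measured result): dd reports its rate as soon as the data reaches the Linux page cache, which is almost certainly why the cheap Class 4 card "wrote" at 170 MB/s above. Adding conv=fdatasync forces a flush to the device before the rate is computed:

```shell
# Same write benchmark, but flush to the device before reporting the rate.
# conv=fdatasync makes dd call fdatasync() at the end, so the reported
# MB/s reflects actual media speed, not page-cache speed.
dd if=/dev/zero of=/tmp/output bs=8k count=10k conv=fdatasync
rm -f /tmp/output
```

An alternative is oflag=direct, which bypasses the page cache for every write rather than flushing once at the end.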
Benchmarking with iozone
Installing iozone on the Raspberry Pi:
1. $ wget http://www.iozone.org/src/current/iozone3_430.tar
2. $ cat iozone3_430.tar | tar -x
3. $ cd iozone3_430/src/current
4. $ make linux-arm
Then run it with:
$ ./iozone -e -I -a -s 100M -r 4k -r 512k -r 16M -i 0 -i 1 -i 2
Internal Kingston 8GB microSD
$ ./iozone -e -I -a -s 100M -r 4k -r 512k -r 16M -i 0 -i 1 -i 2
random random
kB reclen write rewrite read reread read write
102400 4 1029 1204 4895 4797 4245 82
102400 512 9854 10237 13693 13661 13640 1116
102400 16384 10326 10368 13852 13826 13850 10363
Internal SanDisk Ultra 16GB microSD
$ ./iozone -e -I -a -s 100M -r 4k -r 512k -r 16M -i 0 -i 1 -i 2
random random
kB reclen write rewrite read reread read write
102400 4 1269 1467 4311 4311 4252 764
102400 512 9102 8252 18529 18467 18517 1571
102400 16384 7485 10029 18840 18843 18837 9605
Internal Transcend Premium 300x 32GB microSD
$ ./iozone -e -I -a -s 100M -r 4k -r 512k -r 16M -i 0 -i 1 -i 2
random random
kB reclen write rewrite read reread read write
102400 4 1369 1107 5047 5042 4280 933
102400 512 9349 10955 18766 18667 18732 5007
102400 16384 11067 11802 19046 19047 18961 11710
Internal Cheapo Class4 4GB microSD
$ ./iozone -e -I -a -s 100M -r 4k -r 512k -r 16M -i 0 -i 1 -i 2
TODO
External USB 3.0 64GB SSD
$ ./iozone -e -I -a -s 100M -r 4k -r 512k -r 16M -i 0 -i 1 -i 2 -f /ssd/test
random random
kB reclen write rewrite read reread read write
102400 4 3877 5004 5184 5193 4553 4943
102400 512 29181 29630 30053 30036 29394 29554
102400 16384 31528 31540 32025 32153 32236 31446
iozone usage on a modern workstation
This is just to demonstrate the incredibly wide gap between a modest modern workstation's I/O capabilities and the Raspberry Pi's (for anyone who believes that optimizing a Raspberry Pi for modern high-performance computing applications is worth the effort for anything besides education and fun).
Write speed, internal SSD (PCIe):
$ sudo dd if=/dev/zero of=/tmp/output bs=8k count=100k; sudo rm -f /tmp/output
838860800 bytes transferred in 1.295498 secs (647519884 bytes/sec) (647 MB/sec)
Write speed, external SSD (USB 3.0):
$ sudo dd if=/dev/zero of=/tmp/output bs=8k count=100k; sudo rm -f /tmp/output
838860800 bytes transferred in 5.186630 secs (161735230 bytes/sec) (162 MB/sec)
Internal SSD (PCIe):
$ ./iozone -e -I -a -s 100M -r 4k -r 512k -r 16M -i 0 -i 1 -i 2
random random
kB reclen write rewrite read reread read write
102400 4 349051 378714 2057008 2052849 1757784 384525
102400 512 378961 381589 1633720 1831942 1602355 397143
102400 16384 345783 255575 1726816 1740601 1561742 384016
External SSD (USB 3.0):
$ ./iozone -e -I -a -s 100M -r 4k -r 512k -r 16M -i 0 -i 1 -i 2 -f /ssd/test
random random
kB reclen write rewrite read reread read write
102400 4 106728 93730 1954872 1809223 1739037 164208
102400 512 140001 124136 1746128 1798353 1691446 160632
102400 16384 81202 101984 1545561 1809853 1528714 112968
External old-fashioned 2TB spinning disk (USB 3.0):
$ ./iozone -e -I -a -s 100M -r 4k -r 512k -r 16M -i 0 -i 1 -i 2 -f /hdd/test
random random
kB reclen write rewrite read reread read write
102400 4 92336 90505 1969492 2007127 1834830 86063
102400 512 99231 82251 1759390 1654712 1496792 95612
102400 16384 96631 87459 1532236 1525795 1325369 92766
(To build iozone on Mac OS X, I had to run make macosx instead of make linux-arm in step 4 above.)
It's an annoying step in the configuration, and I hate having to do it every time I re-flash an SD card. So let's get it automated as much as possible!
tl;dr Getting Gigabit Networking on a Raspberry Pi 2 and B+
I'd like to see what gives us the most bang for the buck, especially with regard to the load balancer and database server, which will need the lowest latency and highest throughput.
Switching interfaces (via SSH, since I'm doing everything headless) is a simple matter of:
1. $ sudo ifdown [eth0|wlan0] (whichever one you are not connected through).
2. $ sudo ifconfig -a to list all interfaces and verify current status.
3. $ ping 8.8.8.8 to test Internet connectivity, and if it's down:
4. $ ip route show to make sure there's a default route configured.
5. $ sudo ip route add default via 10.0.1.1 dev [eth0|wlan0] (whichever one you are connected through).
6. $ ping 8.8.8.8 again to test Internet connectivity.
Using /tmp (on the internal Kingston Class 10 microSD card):
$ wget -O /tmp/test100.zip http://speedtest.wdc01.softlayer.com/downloads/test100.zip
2015-02-15 15:04:17 (6.05 MB/s) - `/tmp/test100.zip' saved [104874307/104874307]
$ rsync --progress [email protected]:/tmp/test100.zip ~/Downloads/test100.zip
sent 42 bytes received 104887203 bytes 2954570.28 bytes/sec (2.95MB/sec)
$ rsync --progress ~/Downloads/test100.zip [email protected]:/tmp/test100.zip
sent 104887199 bytes received 42 bytes 3555499.69 bytes/sec (3.55MB/sec)
Using /ssd
(on the external USB 3.0 SSD):
$ wget -O /ssd/test100.zip http://speedtest.wdc01.softlayer.com/downloads/test100.zip
2015-02-15 15:18:22 (6.32 MB/s) - `/ssd/test100.zip' saved [104874307/104874307]
$ rsync --progress [email protected]:/ssd/test100.zip ~/Downloads/test100.zip
sent 42 bytes received 104887203 bytes 2796993.20 bytes/sec (2.80MB/sec)
$ rsync --progress ~/Downloads/test100.zip [email protected]:/ssd/test100.zip
sent 104887199 bytes received 42 bytes 3555499.69 bytes/sec (3.55MB/sec)
By testing on an SSD and on the microSD card, it seems pretty obvious that disk I/O is not the bottleneck here, but rather the entire bus (so it seems).
And it also seems likely that, with a decent enough microSD card, there's no real performance to be gained (at least network/throughput-wise) from using an external HDD or SSD. I'm going to also test random read/write scenarios over in #7, so that might shed more light on database/codebase activity, and optimizations to be had there...
Note also that my local Internet connection is more than fast enough to saturate the Pi's network interface (heck, it takes Gigabit Ethernet or 802.11ac with a strong signal to saturate the connection on my Macs!). Here's the initial download, running on my MacBook Air with 802.11ac a couple feet from my AirPort Extreme:
$ wget -O /dev/null http://speedtest.wdc01.softlayer.com/downloads/test100.zip
2015-02-15 16:01:04 (8.77 MB/s) - '/dev/null' saved [104874307/104874307]
Using /tmp
(on the internal Kingston Class 10 microSD card):
$ wget -O /tmp/test100.zip http://speedtest.wdc01.softlayer.com/downloads/test100.zip
2015-02-15 15:32:44 (2.94 MB/s) - `/tmp/test100.zip' saved [104874307/104874307]
$ rsync --progress [email protected]:/tmp/test100.zip ~/Downloads/test100.zip
sent 42 bytes received 104887203 bytes 2589808.52 bytes/sec (2.59MB/sec)
$ rsync --progress ~/Downloads/test100.zip [email protected]:/tmp/test100.zip
sent 104887199 bytes received 42 bytes 2873623.04 bytes/sec (2.87MB/sec)
WiFi is slightly slower; the signal was great, and I think it was hitting the full bus speed here as well (just like with the 10/100 example). But pretty stable and fast regardless. I have no hesitation using a reliable little WiFi adapter in lieu of wired Ethernet when it's more convenient.
Using /tmp
(on the internal Kingston Class 10 microSD card):
$ wget -O /tmp/test100.zip http://speedtest.wdc01.softlayer.com/downloads/test100.zip
2015-02-15 15:59:04 (2.69 MB/s) - `/tmp/test100.zip' saved [104874307/104874307]
$ rsync --progress [email protected]:/tmp/test100.zip ~/Downloads/test100.zip
sent 42 bytes received 104887203 bytes 2655373.29 bytes/sec (2.66MB/sec)
$ rsync --progress ~/Downloads/test100.zip [email protected]:/tmp/test100.zip
sent 104887199 bytes received 42 bytes 3438925.93 bytes/sec (3.44MB/sec)
Using /tmp
(on the internal Kingston Class 10 microSD card):
$ wget -O /tmp/test100.zip http://speedtest.wdc01.softlayer.com/downloads/test100.zip
2015-02-16 21:07:19 (6.32 MB/s) - `/tmp/test100.zip' saved [104874307/104874307]
$ rsync --progress [email protected]:/tmp/test100.zip ~/Downloads/test100.zip
sent 42 bytes received 104887203 bytes 2954570.28 bytes/sec (2.95MB/sec)
$ rsync --progress ~/Downloads/test100.zip [email protected]:/tmp/test100.zip
sent 104887199 bytes received 42 bytes 3438925.93 bytes/sec (3.44MB/sec)
To do this test, install iperf on the Pi ($ sudo apt-get install -y iperf), then run $ iperf -s on the Pi acting as the server, and point the client Pi at it:
$ iperf -c 10.0.1.37
$ iperf -c 10.0.1.36
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 113 MBytes 94.4 Mbits/sec
$ iperf -c 10.0.1.35
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 53.1 MBytes 44.5 Mbits/sec
$ iperf -c 10.0.1.38
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 266 MBytes 222 Mbits/sec
It looks like, at least as far as a Raspberry Pi B+ and A+ are concerned, internal 10/100 Ethernet is more than adequate for most use cases, and other interfaces add throughput, but only for operations where other I/O is not a priority.
If network throughput is a priority, it is definitely worth investing in a Gigabit USB 3.0 Ethernet adapter; even over the Raspberry Pi's USB 2.0 bus, you will see at least double the throughput of the internal 10/100 interface, meaning uploads, downloads and streaming will all be noticeably faster!
NFS seems to add a little network overhead, though not as much as I expected. It might be interesting—especially since this site isn't heavy on file reads/writes—to see whether an inotify-based rsync configuration, or even getting extravagant and using GlusterFS, would speed up the shared files directory over the existing NFS implementation.
Plus it might simplify the config in general; right now I have an NFS mount shared from the cache/Redis server, and NFS has to connect from each of the web heads.
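If the inotify-based rsync route looks appealing, here's a rough sketch (my assumption of how it could work, not config from this repo; hostnames and paths are hypothetical, and it requires the inotify-tools and rsync packages):

```shell
# One-shot sync helper: push a source directory to each destination
# (local path or rsync-style host:path).
sync_files() {
  src="$1"; shift
  for dest in "$@"; do
    rsync -az --delete "$src"/ "$dest"/
  done
}

# Watch loop, run on the server that owns the files (paths/hosts made up):
# while inotifywait -r -e modify,create,delete,move /var/www/drupal/sites/default/files; do
#   sync_files /var/www/drupal/sites/default/files \
#     www1:/var/www/drupal/sites/default/files \
#     www2:/var/www/drupal/sites/default/files
# done
```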
See:
It seems switching the GPIO pin on which the ACT LED operates is as easy as setting the following inside /boot/config.txt:
# Use a different GPIO pin for ACT LED.
dtparam=act_led_gpio=XX
...where XX is the pin to be used.
Is there a way to set this dynamically? Or failing that, some way to control the LED using a script that monitors CPU usage or something (so we can emulate the ACT status)?
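As a partial answer to the scripting question (my sketch, not something from this repo): on Raspbian the ACT LED is exposed through the sysfs LED interface at /sys/class/leds/led0, so a script can take manual control of it or hand it off to a kernel trigger:

```shell
# Sketch: control the Pi's ACT LED from userspace via sysfs.
# LED_DIR defaults to the ACT LED path on Raspbian; override it elsewhere.
# On a real Pi this must run as root, since the sysfs files are root-owned.
LED_DIR="${LED_DIR:-/sys/class/leds/led0}"

# set_act_led MODE — MODE is 0/1 for manual off/on, or a kernel trigger
# name such as heartbeat or mmc0.
set_act_led() {
  case "$1" in
    0|1) echo none > "$LED_DIR/trigger"      # take manual control
         echo "$1" > "$LED_DIR/brightness" ;;
    *)   echo "$1" > "$LED_DIR/trigger" ;;   # hand off to a kernel trigger
  esac
}

# Examples:
#   set_act_led 1          # LED solid on
#   set_act_led heartbeat  # blink in a heartbeat pattern
#   set_act_led mmc0       # restore default SD-activity behavior
```

A CPU-monitoring script could call something like this periodically to emulate an activity status.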
I'd like to see—for the web servers (not the balancer)—whether Nginx or Apache running as mpm_worker will load D8 faster and with more concurrency. Should be relatively simple using my Ansible Galaxy roles :)
Since I'm going to be doing a lot of testing on this platform, I want to have a really stable and easy way to get things back into a pristine condition.
Currently, Raspbian's default image weighs in at a hefty 2.5G (once fully installed and updated) according to df -h
, but I think I can trim that down considerably by removing cruft I don't need for a bunch of headless servers.
I'd like to wrap up the process in an Ansible playbook, so I can rebuild this lightweight image (and allow others to do the same) as-needed. I might even be tempted to host a lightweight zipped .img file on http://files.midwesternmac.com/, but we'll see.
There are a few guides to the process, but nothing authoritative (or updated):
It looks like this would be the process:
1. Tweak /boot/config.txt values (see one of my Pis for current defaults; this should be specific to Pi revision).
2. Remove unneeded files and directories:
/home/pi/python_games
/home/pi/Desktop
/usr/share/icons/gnome/icon-theme.cache
/opt/vc/src/hello_pi/hello_video/test.h264
3. Remove unneeded packages:
$ sudo apt-get remove --purge wolfram-engine
$ sudo apt-get remove --purge desktop-base lightdm lxappearance lxde-common lxde-icon-theme lxinput lxpanel lxpolkit lxrandr lxsession-edit lxshortcut lxtask lxterminal
$ sudo apt-get remove --purge obconf openbox raspberrypi-artwork xarchiver xinit xserver-xorg xserver-xorg-video-fbdev x11-utils x11-common
$ sudo apt-get remove --purge esound-common freepats sonic-pi jackd2 omxplayer
$ sudo apt-get remove --purge dillo squeak-plugins-scratch netsurf-gtk netsurf-common epiphany-browser-data fonts-droid gsfonts ruby1.9.1
(and check what else is lurking in /usr/share!)
4. $ sudo apt-get autoremove
5. $ sudo apt-get clean
Helpful hints:
List removed-but-not-purged packages: $ sudo dpkg --get-selections | grep deinstall (question: does this actually list everything?)
Find large files: $ sudo find / -type f -size +10000k -exec ls -lh {} \; | awk '{ print $NF ": " $5 }'
Find large directories: $ sudo du -hsx * | sort -rh | head -10
I would like to add a separate python script that accepts a few simple arguments and is run on startup (maybe via init script).
Also, maybe have 'drupal' for drupal blue as a demo for doing serial vs. clustered deployment.
According to #20, GlusterFS was a tiny bit slower than NFS, but since it offers greater redundancy (especially with 3 replicas across the 3 webservers), and doesn't require an extra server to act as the NFS server, I'm going to switch.
I have the configuration ready/working locally, just using this issue to track the commits.
Are there modules for D8 yet?
Looks like we have:
Any other methods of proactively purging things? Or will I need to dump the /var/cache/nginx/* directory every time I deploy an update? (Or set the max page TTL way lower under /admin/config/development/performance.)
Currently the Dramble really is only configured to run the demonstration D8 codebase, which I maintain and which will likely always be a couple beta/final releases behind the latest stable D8 release.
Therefore it would be nice if there were an easy/simple way to tell the playbook, "install D8 HEAD from Drupal.org using this makefile rather than the demo codebase".
Random IO is causing iowait to be the major source of contention with almost any Drupal-related activity on the db server Pi right now.
I need to benchmark whether having the db store internal or external is faster; it seems the microSD reader might have some sort of IO advantage over external USB devices, but IO in general is fairly slow through USB 2.0, and I'm not sure what will give the best speed.
It'd be nice to have a HUD of the entire cluster's activity. Munin is lightweight, easy to get going, and can maybe run on one of the three webservers, or on the caching server maybe...
I'd ideally be able to switch the balancer between Varnish and Nginx. Maybe add a variable like dramble_balancer_software
and default it to nginx
.
I don't think we'll get much improved performance out of Varnish, but since the Drupal community still has a lot of momentum/familiarity behind Varnish (as opposed to Nginx), it would be good to allow it to be used—and to show how easy it is to switch between them using Ansible :)
See: https://github.com/wg/wrk/wiki/Installing-wrk-on-OSX
It seems to be better at slightly-more-realistic huge-load testing, but for the Pi, I don't think I'm necessarily running into any limitations with ab
.
Since this cluster is meant to run standalone, and the Pi doesn't have an RTC, I've ordered an RTC that I can stick onto one of the Pis, then set up the Pi as an NTP server.
I'll need to configure all the other Pis to use the master Pi as the NTP server, then have the master Pi update its time periodically when connected to the Internet.
Should hopefully not be too difficult to set up; I can probably do most of it with the geerlingguy.ntp
role, and here's a guide to help as well: Use local NTP server with Raspberry Pi.
Since I'd like to compare baseline Raspberry Pi 2 performance to a 'real' infrastructure setup in the cloud, I'd like to add a DigitalOcean provisioning playbook that builds the servers with 1 GB RAM, 1 CPU, 30 GB HDD, and Debian 7, so I can deploy to them and run performance benchmarks.
Since Redis adds minimal performance improvement (for the needs of this cluster), and requires the use of a full Raspberry Pi (the way I have it configured), I'd like to remove the Redis server from the mix, and instead use it as a fourth Drupal backend server.
This would also make the general playbook configuration a little simpler, and with Drupal 8's recent huge performance improvements, I think the requests/second across the cluster will improve much more dramatically by adding another web head.
Just need to verify this; it should be a simple task. It won't run on the initial deploy, but after the 1.0.0 update, the module should be enabled.
I think the only thing that needs to happen to make it so is adding a step to stick the Git repository on one of the servers (maybe on the balancer), then update the Drupal playbook to clone from there instead of GitHub directly.
This will make deployments during rapid development a little more annoying, but at least the entire demonstration of deploying Drupal to the Raspberry Pi Dramble will be local to the Pi and not rely on an internet connection.
There are a bunch of things about the Dramble that I'd like to document more permanently, but not in the main README. Things like:
Things like that.
Logging can quickly slow down the Pis, since every flush to disk is going to be pretty slow (writing to microSD cards == gonna be a bad day).
Logging config in nginx can be set to write to disk infrequently, but it's still going to be a small bottleneck when serving tons of traffic during a load test. I'd rather set up logstash and route logging over the network to one of the Pis. Maybe. It might be too much overhead.
Another possibility is just dropping logging altogether. Annoying for debugging... but we're just interested in getting the infrastructure running—and running fast.
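A middle path (my sketch; access_log buffering is standard nginx, but these values are illustrative and not from this repo's config) is to buffer log writes so nginx touches the microSD far less often:

```
# nginx http {} or server {} block — batch access-log writes
access_log /var/log/nginx/access.log combined buffer=64k flush=5m;

# ...or drop access logging entirely while load testing:
# access_log off;
```

With a buffer, nginx only writes when the buffer fills, the flush interval passes, or a worker exits, rather than on every request.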
It seems there are two bottlenecks in the playbook runs right now: first, the install step, which takes maybe 1-2 minutes, and second, the configuration update step on the second deploy.
I'd like to see if enabling APC for the CLI speeds up drush tasks in any perceivable way, since PHP was consuming 60-95% of the CPU during parts of both processes. The database seems to be humming along nicely, so PHP is one of the only areas where I can make decent gains at this point.
For local testing and comparison purposes, I'd like to add a Vagrant configuration to build at least most of the servers required (maybe just two webservers instead of three, to conserve RAM/CPU), and run the playbooks.
Also, I'll add it in a testing/
subdirectory along with perhaps a configuration for deploying everything to DigitalOcean and/or AWS.
Currently, because the Redis module isn't installed by default, I have to start things with the following line commented in settings.php:
// $settings['cache']['default'] = 'cache.backend.redis';
Only after Drupal is installed and the Redis module is enabled can I uncomment that line and redeploy settings.php. I tried adding the conditional if (\Drupal::moduleHandler()->moduleExists('redis')) {
inside settings.php, but it seems the module_handler isn't yet loaded into the container when settings.php is called, so I can't check for the module's existence. I get the error:
2015/02/26 14:42:37 [error] 3339#0: *11 FastCGI sent in stderr: "PHP message: PHP Fatal error: Call to a member function get() on a non-object in /var/www/drupal/core/lib/Drupal.php on line 446" while reading response header from upstream, client: 192.168.77.2, server: 192.168.77.3, request: "GET /user HTTP/1.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "pidramble.com"
Raspberry Pi-compatible (ARM) apt repos only contain PHP versions up to 5.4.x. A couple options:
- Compile PHP from source (e.g. via the geerlingguy.php role, but that would take absolutely forever on the Pi).
- Pull a newer PHP from the jessie repo sources (via pinning).
- Try a third-party repo; most only target x64 architectures, but worth a try.
I need to create a sister issue in the drupal-pi project, since PHP 5.5+ will soon become a hard requirement for Drupal 8...
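For the pinning option, a sketch of what the apt preferences might look like (the package glob and priority are illustrative, not tested config from this repo; you'd also need a jessie line in /etc/apt/sources.list):

```
# /etc/apt/preferences.d/php-jessie
Package: php5*
Pin: release n=jessie
Pin-Priority: 600
```

A priority above 500 makes apt prefer the jessie PHP packages over wheezy's without pulling in the rest of jessie.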
So it can take on a life of its own. Easier/faster to iterate on just one Pi anyways :)
There are a few niggles I'm still working through in terms of config dump for the demo-drupal-8
project:
Check out whether Redis or Memcached is faster on the Pi cluster. Tradition and small memory capacity seems to indicate that Memcached would be the better option for this particular project... but I haven't benchmarked Redis in a while, and it might make things simpler if I just have 1 Redis server and 2 Web servers, instead of distributing Memcached across all the webservers... (see related: #1).
It'd be nice to have a quick overview of how much computing power/resources are present in a 5-node Raspberry Pi 2 model B cluster :)
Check into whether MariaDB might be simple enough/feasible on the Pi. I've done a lot of work with MySQL 5.5 on the Pi, and it runs well enough, but can I just as easily install MariaDB, even using the geerlingguy.mysql
role? Or maybe create a new mariadb
role?
Use the geerlingguy.redis
Ansible role.
From some informal testing, it looks like wrk
might be a little more lightweight and accurate than ab
for local-network tests. Plus the summary is much more concise, which I like :)
For example:
$ wrk -t12 -c200 -d30 http://pidramble.com/about
Running 30s test @ http://pidramble.com/about
12 threads and 200 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.37s 6.33s 16.36s 79.36%
Req/Sec 417.69 650.51 5.44k 92.71%
78676 requests in 30.01s, 516.52MB read
Socket errors: connect 0, read 0, write 0, timeout 652
Requests/sec: 2622.02
Transfer/sec: 17.21MB
I need to see if it can do things as quickly (or close to as quickly) as the internal microSD card. If so, it might be worth using instead of the external USB 3.0 SSD I'm currently using (which is measurably faster, but pretty bulky).
This is the flash drive in question: SanDisk Ultra Fit USB 3.0 16GB Flash Drive.
I've been working on one in LucidChart, and it will look pretty decent, I hope.
Debian comes with 5.4 by default; I might get around to testing 5.5/opcode caching separately, but I just want to turn on APC, and turn it off, and see how that impacts things.
Currently it looks like php5-fpm is using a pm.max_children limit of 5 (is that the default? We don't have anything configured in /etc/php5/fpm/php-fpm.conf).
This means PHP seems to max out around 250% of the Raspberry Pi's 4-core CPU, with another 50% or so going to Gluster.
We should be able to go up to 6 processes (which could probably push through 18-20 req/second, up from the current maximum of ~14/second), while still leaving a little overhead for Gluster and system processes.
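A sketch of the pool change (the pool file path is the Debian default for php5-fpm; every value besides pm.max_children is my illustrative guess, not a measured tuning):

```
; /etc/php5/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 6
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 4
```

Reload php5-fpm after editing for the new limit to take effect.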
Currently my configurations don't use gzip, and I've been pushing through around 1500 cached req/s, or 10MB/sec in bandwidth. I wonder if we can eke out a little more than that with gzip enabled.
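A minimal nginx gzip sketch to test with (the directives are standard nginx; the compression level and types are my suggestions, tuned low to keep CPU cost down on the Pi, not values from this repo's playbooks):

```
# nginx http {} block
gzip on;
gzip_comp_level 2;     # low level: cheap on the Pi's CPU, still shrinks text well
gzip_min_length 1024;  # don't bother compressing tiny responses
gzip_types text/css application/javascript application/json image/svg+xml;
```

text/html is always compressed when gzip is on, so it doesn't need to be listed.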
There are a number of things I need to document more formally elsewhere (to save my future self and others time in hardware-related setup with the Pi and accessories)... but for now, I'll just add them here, in this issue.
I bought a decent, name-brand 6-port 2A-per-port (except for a couple) power supply to power all the Pis in the Dramble. I could've also gone with a PC 5V PSU and hacked the power to the Pi through GPIO pins (a lot of cluster builds do this, since it's easier to custom wire and doesn't require a mess of micro USB cables).
But either way, you need to get clean, 2A or greater power to the Pi, or you're going to run into weird issues, like random restarts, USB device issues, network flakiness, etc.
If you ever run into strange issues with your Pi, check your power supply. For other Pis I've been using Samsung and Apple 2A chargers (like the one that comes with an iPad), and they have worked great for a couple years!
I have an SSD that seems to require something like 300-500mA of current to function properly. Mix that with a 40mA USB keyboard and a 100-200mA WiFi dongle, and the default 600mA supplied over the Pi's USB bus is a bit cramped. To prevent certain medium-to-high-powered USB devices from crashing your Pi or taking down other USB devices when plugged in, there's a /boot/config.txt parameter that doubles the default current available on USB. To enable this mode:
$ sudo nano /boot/config.txt
Add the line max_usb_current=1 and save the file.
This hack is only available on the Raspberry Pi B+ and later Pis (like the Pi 2 model B). Also, I have an open question (and another) asking whether there are any real downsides to setting this value to 1. For now, I'm only setting this value on Pis where I need to power an external HDD/SSD.
See this guide, which I posted to Midwestern Mac's blog: Setting up an 802.11n WiFi adapter on the Raspberry Pi. Shows how to connect using WPA Supplicant, and how to prevent the WiFi from going into standby mode.
If you accidentally break your Pi by editing the wrong file or breaking configuration somewhere on the microSD card you use to boot the Pi, you can usually just pull it and mount it on another workstation, edit the file to revert the change, and pop it back in your Pi.
I wrote a guide for mounting a Raspberry Pi's ext4-formatted microSD card in Ubuntu 14.04 on a Mac, and the process for other platforms is similar (use a VM, make it easy).
You can also re-image the entire SD card, and that's generally what I do if I've botched things too badly (easy to rebuild things when the configuration's all done in Ansible :).
See: http://buytaert.net/making-drupal-8-fly
D8 offers a lot more integration with caching logic, pumping data through the right places at the right times, etc. Maybe write about the opportunities offered by D8 with Redis, Nginx caching, etc.
To make almost everything easier... use name-based configuration and connections (e.g. www1.pidramble.com
etc.) instead of IP-based configuration. Maybe. Need to think it through further. But would make #40 a little simpler, I think.
This will be helpful information when it comes time to work on a potential inline UPS.
Some numbers:
Pi State | Power Consumption (total) | Per Pi (average)
---|---|---
Idle | 170 mA (7.2W) | 28 mA (1.2W)
400% CPU load | 230 mA (11.4W) | 38 mA (1.9W)
400% CPU load, 1x USB 64GB SSD | 300 mA (13.6W) | 50 mA (2.3W)

These readings were taken with the Pis in the following configuration: for the load rows, running stress -c 4 to max out all 4 CPU cores.
to max out all 4 CPU cores.I have a bunch of RGB LEDs, and I'd like to mount them to the front of the Raspberry Pi cases, one for each Pi. I'd need a small library/command line app that sets the LED into different modes (like PWM-based 'breathing' status for nothing happening, green for deployment happening, drupal blue for drupal going... that kind of thing).
I need to work on the hardware aspect (GPIO to LED and a decent/flexible mounting solution), and the software aspect (a library that I can call through Ansible to change the lights).
See the following resources for more info:
There are a few considerations here (along with the determination as to what balancing software to use—see #1):
I'm definitely leaning towards Nginx, for simplicity's sake, and since it's used all over the place. I'm not really considering Apache, since I haven't been impressed with its balancing/proxying capabilities (which are getting better, but are still a bit resource-intensive).
Here are a couple links to help tease out using Nginx:
Many people have asked about installing Drupal 8 (specifically) on a single Raspberry Pi, as they're not necessarily willing to put down $400+ on a full cluster.
I'd like to add a separate playbook, maybe under the testing
directory, along with a separate README, which details setting up Drupal on a single Raspberry Pi.
As part of #17, I added the wheezy-backports apt repo to the balancer.yml
playbook. I'd like to see if, as explained in Installing Nginx Using the Debian Wheezy Backports Package, I can install a newer version than the relatively outdated one (1.2.1) that comes with Wheezy's core repos.