parlaynu / linux-nfs-cache-demo

Demo setup of nfs client/server with an nfs cache (FS-Cache/cachefilesd) in between. Client and server sites in different VPCs/different regions, connected via wireguard VPN.

License: MIT License

linux nfs cachefilesd fs-cache aws terraform ubuntu ansible wireguard gcp

Linux NFS Cache Demo

Architecture

architecture diagram

Testing Results

General operational results are here.

Tests of failure scenarios are here.

Build Steps

  • satisfy the prerequisites
  • build the cloud infrastructure (with terraform)
  • configure the servers (with ansible)
  • set up a grafana dashboard for monitoring
  • create test files on the nfs-server
  • run tests on the nfs-client

Prerequisites

The core requirements are:

For AWS:

  • follow the getting started guide here

For GCP:

  • follow the getting started guide here

Building Cloud Infrastructure

Generate The Wireguard Keys

From the scripts directory, run this command to generate the keys:

./make-keys.sh

It produces output like the following, which you can copy and paste into the terraform vars file.

NOTE: these aren't valid keys.

server keys
    vpn_private_key = "YMAlCdUgLpVfRwXVI6RXR937YkHYAC2hrtAnheBWQ="
    vpn_public_key  = "GJXpV6tVJBR55CN9Hdv3BI6CE2wRfxc5mvTZ6aTlE="
client keys
    vpn_private_key = "kGNAGM61jLOoi0CDyegii6BrQMzsc0XBZNYIjBbVo="
    vpn_public_key  = "3zrNQ/MHP99CnLK+DFSceiysM/cM0i3F76pSl8fkw="
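WireGuard keys are 32 random bytes, base64-encoded, which always yields 44 characters ending in `=` — the sample keys above are shorter than that, which is one reason they are not valid. A quick standalone shell check (a hypothetical helper, not part of the repo):

```shell
#!/bin/sh
# Sanity-check that a string is a plausible WireGuard key:
# base64 text of exactly 44 characters that decodes to exactly 32 bytes.
is_valid_key() {
    key="$1"
    [ "${#key}" -eq 44 ] || return 1
    [ "$(printf '%s' "$key" | base64 -d 2>/dev/null | wc -c)" -eq 32 ]
}

# The sample key from this README is deliberately invalid (too short):
if is_valid_key "YMAlCdUgLpVfRwXVI6RXR937YkHYAC2hrtAnheBWQ="; then
    echo "valid"
else
    echo "invalid"
fi
```

Running this prints `invalid` for the sample key; a key produced by make-keys.sh should pass both checks.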

Prepare Terraform

From within the terraform directory (terraform-aws or terraform-gcp), copy the file terraform.tfvars.examples to terraform.tfvars and customize it for your environment.

For both environments:

  • update the server-site variable:
    • set the region and zone to where you want the server to run
    • update the vpn_private_key and vpn_public_key values with the server keys generated by make-keys.sh
  • update the client-site variable:
    • set the region and zone to where you want the client to run (must be different from the server's)
    • update the vpn_private_key and vpn_public_key values with the client keys generated by make-keys.sh
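As a sketch, the site variables in terraform.tfvars might look like this (region names are placeholders and the exact attribute set is defined by the repo's variables, so check the copied example file):

```hcl
server-site = {
  region          = "eu-west-1"
  zone            = "eu-west-1a"
  vpn_private_key = "<server private key from make-keys.sh>"
  vpn_public_key  = "<server public key from make-keys.sh>"
}

client-site = {
  region          = "us-east-1"
  zone            = "us-east-1b"
  vpn_private_key = "<client private key from make-keys.sh>"
  vpn_public_key  = "<client public key from make-keys.sh>"
}
```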

For AWS:

  • update the aws-profile variable if you're using a profile other than default
  • make sure that the ami_for_region map has entries for the regions you are using
    • these should be the Ubuntu 22.04 LTS 64-bit images
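A sketch of the map (the AMI IDs below are placeholders — look up the current Ubuntu 22.04 LTS amd64 AMI for each region you use, for example via Ubuntu's cloud image finder):

```hcl
ami_for_region = {
  "eu-west-1" = "ami-0xxxxxxxxxxxxxxxx" # Ubuntu 22.04 LTS amd64, eu-west-1
  "us-east-1" = "ami-0yyyyyyyyyyyyyyyy" # Ubuntu 22.04 LTS amd64, us-east-1
}
```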

For GCP:

  • set the project_id
  • set the credentials_file variable to point to the location of your GCP credentials file
  • set your username within GCP
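A hypothetical sketch of the GCP values (the variable name for the username is an assumption here — use whatever name the example tfvars file declares):

```hcl
project_id       = "my-gcp-project"                  # your GCP project ID
credentials_file = "~/.config/gcloud/my-creds.json"  # path to your credentials file
username         = "your-gcp-username"               # your username within GCP
```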

Run Terraform

From within the terraform directory, run these commands:

terraform init
terraform apply

Configure Servers

From within the terraform directory, run the ansible wrapper script:

./local/ansible/run-ansible

Set Up Grafana Dashboards

Set up port forwarding from your local machine to the metrics server by running the following from within the terraform directory:

ssh -F local/ssh.cfg metrics-ports

Browse to http://localhost:3000 and you will be connected to the grafana instance running in the client VPC.

Create a datasource to the prometheus instance running on the same machine: http://localhost:9090.
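The repo generates local/ssh.cfg during the terraform run; a sketch of what the metrics-ports host entry might contain (the host name comes from the command above, the remaining values are illustrative):

```
Host metrics-ports
    HostName <metrics server public IP>
    User ubuntu
    LocalForward 3000 localhost:3000   # grafana
    LocalForward 9090 localhost:9090   # prometheus
```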

Create a dashboard to monitor network and disk IO on nfs-server, nfs-cache and nfs-client. See the graphs in the test performance document for some examples.

Create Test Files

From within the terraform directory, log into the nfs-server:

ssh -F local/ssh.cfg nfs-server

The helper tools were built as part of the ansible run and are available in the bin directory. Create the test files using:

mktestfiles -n 6 /shows/test 60

This uses 6 goroutines to create files in the /shows/test directory, filling it to 60% capacity.

The files are named after the sha256 hash of their content. This is used by the read routines to verify the contents of the files when testing.
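The content-addressed naming scheme can be illustrated in plain shell (a standalone sketch, not the actual Go implementation):

```shell
#!/bin/sh
# Sketch of content-addressed naming: write a file, rename it to the
# sha256 hash of its bytes, then verify the way the read tools do.
tmpdir=$(mktemp -d)
printf 'demo payload' > "$tmpdir/data"
hash=$(sha256sum "$tmpdir/data" | cut -d' ' -f1)
mv "$tmpdir/data" "$tmpdir/$hash"

# Verification: recompute the hash and compare it with the file name.
check=$(sha256sum "$tmpdir/$hash" | cut -d' ' -f1)
if [ "$check" = "$hash" ]; then
    echo "verified"
fi
rm -rf "$tmpdir"
```

Any corruption of the file's bytes changes the recomputed hash, so a mismatch against the file name flags the file as damaged.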

Run Tests

From within the terraform directory, log into the nfs-client:

ssh -F local/ssh.cfg nfs-client

To read all the files in the nfs mount point, run:

fsreadall -n 4 /shows/test

This will read all the files through the nfs-cache and save the contents in the local cache directory. It also verifies that the sha256 content hash of each file matches its file name.

Run it a second time and it will read the files from the local cache rather than the nfs-server, completing in much less time than the first run.

Other tests you can run include:

fsreadrandom -n 6 /shows/test 30m

This will randomly read ranges from within randomly selected files in the test area for 30 minutes. The format for specifying the run duration is documented here.

fsreadwrite -n 6 /shows/test 30m

This randomly selects one of three actions to perform:

  • read the full range of a file (and verify the sha256 content hash)
  • read a random section from within the file
  • create a new file on the server and delete the original
