
dashboard.sitespeed.io's Introduction

Tests running dashboard.sitespeed.io


This is a working example of how you can use sitespeed.io to monitor the performance of your web site. The code runs on a Digital Ocean instance and sends the metrics to dashboard.sitespeed.io (which is set up using our docker-compose file and configured for production usage).

You should use this repository as an example of what you can set up yourself. The idea is to make it easy to set up, easy to add new URLs to test, and easy to add a new user journey. You start a script (loop.sh) on your server that runs forever; on each iteration it runs git pull to update the scripts, so if you add new URLs to test they are automatically picked up.

You can check out the full documentation at our documentation site.

Do you want to add a new URL to test on desktop? Navigate to desktop and create your new file there. Want to add a user journey? Add the script in the same place and give it a .js extension.
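As a minimal sketch of adding a new URL (the file name example.txt and the URL are made-up placeholders, not files from this repo), you just drop a text file with one URL per line into the desktop folder:

```shell
# Hypothetical example: add a new page to test on desktop.
# "example.txt" and the URL are placeholders for illustration only.
mkdir -p tests/desktop
printf 'https://www.example.com/\n' > tests/desktop/example.txt
cat tests/desktop/example.txt
```

On the next iteration of the loop, git pull picks the file up and it gets tested automatically.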

Our example runs tests for desktop and emulated mobile (both URLs and scripts).

The structure looks like this:

.
├── README.md
├── config
│   ├── README.md
│   ├── alexaDesktop.json
│   ├── alexaMobile.json
│   ├── crux.json
│   ├── desktop.json
│   ├── desktopMulti.json
│   ├── emulatedMobile.json
│   ├── emulatedMobileMulti.json
│   ├── loginWikipedia.json
│   ├── news.json
│   ├── replay.json
│   └── spa.json
├── loop.sh
├── run.sh
└── tests
    ├── desktop
    │   ├── alexaDesktop.txt
    │   ├── crux.txt
    │   ├── desktop.txt
    │   ├── desktopMulti.js
    │   ├── loginWikipedia.js
    │   ├── news.txt
    │   ├── replay.replay
    │   └── spa.js
    └── emulatedMobile
        ├── alexaMobile.txt
        ├── emulatedMobile.txt
        └── emulatedMobileMulti.js
        

loop.sh is the starting point. Run it. That script will git pull the repo on every iteration and then run the script run.sh.

Then run.sh will use the right configuration in /config/ and run the URLs/scripts that are configured. Our configuration files extend configuration files that only exist on the server, where we keep secrets like usernames and passwords. You don't need to set it up that way if you use a private git repo.
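The control flow can be sketched like this. This is a simplified, runnable illustration of the loop.sh pattern, not the actual script: the real loop runs git pull and ./run.sh, which are stubbed with echo here, and in production you stop it by deleting the control file by hand.

```shell
#!/bin/sh
# Simplified sketch of the loop.sh pattern (illustration, not the real script).
# The loop keeps going as long as the control file exists; each iteration
# would pull the latest tests and then run them.
touch sitespeed.run
iterations=0
while [ -f sitespeed.run ]; do
  echo "would run: git pull && ./run.sh"   # stand-ins for the real commands
  iterations=$((iterations + 1))
  # Demo stop condition: in production you remove sitespeed.run manually.
  [ "$iterations" -ge 2 ] && rm -f sitespeed.run
done
echo "loop finished after $iterations iterations"
```

Because the pull happens before every run, edits pushed to the repo take effect on the next iteration without restarting anything.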

Install

Run your tests on a Linux machine. You will need Docker and Git. You can follow Docker's official documentation or follow our instructions:

# Update
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common -y

## Add official key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

## Add repo
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

sudo apt-get update

# Install docker
sudo apt-get install docker-ce docker-ce-cli containerd.io -y

You also need git on your server:

sudo apt-get install git -y

Then, depending on where you run your tests, you will want to set up the firewall.

Your server is now ready for testing.

Setup

On our server we clone this repo, but you should clone your own :)

git clone https://github.com/sitespeedio/dashboard.sitespeed.io.git

On our server we have two configuration files that only exist on that server; that's where we keep the secrets. They look like this:

/conf/secrets.json

{
  "graphite": {
    "host": "OUR_HOST",
    "auth": "THE_AUTH",
    "annotationScreenshot": true
  },
  "slack": {
    "hookUrl": "https://hooks.slack.com/services/THE/SECRET/S"
  },
  "resultBaseURL": "https://s3.amazonaws.com/results.sitespeed.io",
  "s3": {
    "key": "S3_KEY",
    "secret": "S3_SECRET",
    "bucketname": "BUCKET_NAME",
    "removeLocalResult": true
  },
  "browsertime": {
    "wikipedia": {
      "user": "username",
      "password": "password"
    }
  }

}

Run

Go into the directory where you cloned the repository: cd dashboard.sitespeed.io. Then start: nohup ./loop.sh &

To verify that everything works, tail the log: tail -f /tmp/sitespeed.io.log

Stop your tests

Starting your tests creates a file named sitespeed.run in your current folder. The script on the server will continue to run forever until you remove the control file: rm sitespeed.run

The script will then stop when it has finished the current run(s).
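The stop mechanism can be shown in isolation. This is a hedged sketch of the control-file convention described above, not code from the repo:

```shell
# The loop keeps running as long as the control file exists.
touch sitespeed.run
[ -f sitespeed.run ] && echo "control file present: tests keep running"

# Removing it tells the loop to exit after the current run(s).
rm sitespeed.run
[ -f sitespeed.run ] || echo "control file gone: loop will stop"
```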

Start on reboot

Sometimes your cloud server reboots. To make sure your tests start automatically again, you can add them to the crontab. Edit the crontab with crontab -e and add the following (make sure to change the path to your installation and the server name):

@reboot rm /root/dashboard.sitespeed.io/sitespeed.run;cd /root/dashboard.sitespeed.io/ && ./loop.sh

dashboard.sitespeed.io's People

Contributors

mamercad, ogarling, pal, sburnicki, soulgalore


dashboard.sitespeed.io's Issues

Testing multiple URLs but results are stored in the same folder

Hi, I have just started to learn how to use sitespeed.io for my projects and I was able to set up Grafana on Digital Ocean. While testing this script to run the tests continuously, I ran into some issues.

  1. I am using exactly the same settings as in this repo and only modified the necessary parts, like the Graphite host/auth, the URLs to test in desktop.txt, etc.
  2. I am testing multiple pages in multiple domains.
    Example:
    abc.com
    abc.com/products
    abc.com/about-us
    def.com
    def.com/products
    def.com/about-us
  3. I assumed the results would be stored in separate folders under an abc.com/def.com folder, but they are currently stored as follows:
    /sitespeed-result/abc.com/[DATE]/pages/abc.com
    /sitespeed-result/abc.com/[DATE]/pages/def.com

May I know if this can be configured to work as below?
/sitespeed-result/abc.com/[DATE]/
/sitespeed-result/def.com/[DATE]/

Empty leaderboard & wikipedia login dashboards

I'm running this on my Linux server. It does run the tests properly and I can see the data getting logged into Graphite.

But in Grafana, only the page metrics dashboard is populated with data; other dashboards like leaderboard and Wikipedia login are empty.

my secrets.json looks something like this:

{
  "graphite": {
    "host": "xxxxxxxxxxx",
    "port": "xxxxxxxxx",
    "auth": "root:root",
    "annotationScreenshot": true
  },
  "resultBaseURL": "https://xxxxxxxxxx",
  "s3": {
    "key": "xxxxxxxxxxxx",
    "secret": "xxxxxxxxxxxx",
    "bucketname": "xxxxxxxxxxx",
    "copyLatestFilestoBase": true,
    "removeLocalResult": true
  },
  "browsertime": {
    "wikipedia": {
      "user": "xxxxxxxxxxxx",
      "password": "xxxxxxxxxxxx"
    }
  }
}

I don't know if I'm missing something in the config or should tinker with the variables for the leaderboard dashboard to get it working. Any help is appreciated.

Different hosts are in the same report folder

I am trying to analyze 2 different domains, approximately 5-8 pages from each.
I edited the file "desktop.txt" and included all of the URLs; a file sample is here:
desktop.txt

So the final file structure looks like: "first_domain_in_desktop.txt/pages/BOTH_DOMAINS"

Test timings

Hi there,

Just looking into this, and wondering - how do we set it up so it runs the tests once a day (for example)?

Thanks!
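For reference (this is not from the original thread): instead of the always-on loop.sh, a cron entry can trigger a single run on a schedule. The path and time below are assumptions to illustrate the idea; adjust them to your installation:

```shell
# Hypothetical crontab entry: run the tests once a day at 06:00.
# Add it with "crontab -e"; the path is an example, not from this repo.
# 0 6 * * * cd /root/dashboard.sitespeed.io && ./run.sh >> /tmp/sitespeed.io.log 2>&1
```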

Resolving host while using txt-file

Dear sitespeed-team,
dear @soulgalore ,

we use the sitespeed dashboard to measure the response times of a website.
With the dashboard it is possible to test the website via a txt file or a script (.js) file.

The code in the "run.sh" file looks like this:

for url in tests/$TEST/desktop/urls/*.txt ; do
  [ -e "$url" ] || continue
  for browser in "${BROWSERS[@]}" ; do
    POTENTIAL_CONFIG="./config/$(basename ${url%%.*}).json"
    [[ -f "$POTENTIAL_CONFIG" ]] && CONFIG_FILE="$(basename ${url%.*}).json" || CONFIG_FILE="desktopWithExtras.json"
    NAMESPACE="--graphite.namespace sitespeed_io.$(basename ${url%%.*})"
    docker run $DOCKER_SETUP $DOCKER_CONTAINER $NAMESPACE --config config/$CONFIG_FILE -b $browser $url
    control
  done
done

The file contains a URL on each line.

However, if we test the website in this way, we get a response time that does not match the perceived page load at all.
To find out why the time is so different, we have enabled video recording in the dashboard as follows:

file "config/desktop.json"
{
  "extends": "/config/secrets.json",
  "browsertime": {
    "connectivity": {
      "engine": "external",
      "profile": "native"
    },
    "visualMetric": true,
    "visualElements": true,
    "iterations": 1,
    "browser": "chrome",
    "video": true,
    "chrome": {
      "timeline": true,
      "collectConsoleLog": false,
      "enableTraceScreenshots": false
    },
    "firefox": {
      "includeResponseBodies": "all",
      "disableBrowsertimeExtension": true,
      "windowRecorder": false,
      "geckoProfiler": true
    },
    "pageLoadStrategy": "none",
    "pageCompleteWaitTime": 0,
    "pageCompleteCheckStartWait": 0,
    "pageCompleteCheckPollTimeout": 0
  },
  "cpu": false,
  "gzipHAR": false,
  "html": {
    "fetchHARFiles": true,
    "showScript": false,
    "compareURL": "https://compare.sitespeed.io/"
  },
  "plugins": {
    "remove": ["axe", "sustainable"]
  },
  "screenshot": {
    "type": "jpg"
  }
}

The videos are also created. However, if we look at the videos from this URL, we see that "resolving host" is displayed at the bottom left for a very long time when the website is opened. This also explains why the call takes so long.

Since we cannot explain this, we also called the URL via a script (.js) file.
The content of the script file looks like this:

module.exports = async function(context, commands) {
  await commands.measure.start('https://URL-here/', 'Landingpage');
  return await commands.screenshot.take('screen_website');
};

However, the videos from this call show only white content.

Why does it take so long to access the website with the txt file (resolving host problem)?

We would be really happy if you could help us. We would like to continue using sitespeed, but we cannot explain the times.

Thanks,
Danny

Sample contents of `/config/crux.json`

config/crux.json in the repository extends /config/crux.json. Can you share sample contents for /config/crux.json? Is it the same as /config/secrets.json?

Unable to update the results to Grafana in aws

Hi,

Sorry if I asked this question on the wrong page.
I was trying to run this from my AWS EC2 instance.

I followed all the instructions from https://www.sitespeed.io/documentation/sitespeed.io/performance-dashboard/#up-and-running-in-almost-5-minutes along with a video from YouTube: https://www.youtube.com/watch?v=1hGqucegbiE.

Every step under "Up and running in (almost) 5 minutes" works, but after the 3rd step my test results did not show up in my AWS Grafana dashboard. I ran a lot of sites using step 3, but none of them showed up; it still shows the default dashboard.

Command run on AWS Linux: docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:13.3.2 --graphite.host=host.docker.internal https://www.mytestsitessss.com

Are there any steps I might have missed? Please help.

Compare two image

Hello sitespeed-team,

We are using the dashboard to measure performance data from our website.
I created a .js file in

/opt/sitespeedio/dashboard/tests//desktop/scripts

with the following content:

module.exports = async function(context, commands) {
  // We start by navigating to the login page.
  await commands.navigate(
    'https://<URL>'
  );

  // When we fill in an input field or click on a link, we want to
  // try/catch so that if the HTML on the page changes in the future,
  // sitespeed.io will automatically log the error in a user-friendly
  // way, and the error will be re-thrown so you can act on it.
  try {
    // Add text into an input field, finding the field by id
    try {
         await commands.addText.byId('<USER>', 'principal_name');
         await commands.addText.byId('<PASSWORD>', 'password');

         // Find the submit button and click it, then wait
         // for the pageCompleteCheck to finish
         await commands.click.byIdAndWait('submit');
         await commands.wait.byTime(2000);
         await commands.screenshot.take('screen_login_success');
    } catch (e) {
         commands.error('Add value not possible');
         await commands.screenshot.take('screen_login_failed');
    }
    // Measure the site page as a logged in user

// click to open groupware
    await commands.measure.start('Loading groupware');
    try {
       await commands.click.byXpathAndWait('/html/body/div[1]/div[2]/div[1]/div[3]/a');
       await commands.screenshot.take('screen_groupware');
    } catch (e) {
         commands.error('groupware not working');
         await commands.screenshot.take('screen_groupware_failed');
    }
    await commands.measure.stop();


// Logout         
    await commands.screenshot.take('screen_before_logout');
    try {
      await commands.click.byXpathAndWait('/html/body/div[1]/div[2]/div[3]/div[2]/a');
      await commands.screenshot.take('screen_success_logout');
    } catch (e) {
      commands.error('Logout not possible');
      await commands.screenshot.take('screen_logout');
    }
    return await commands.screenshot.take('screen_after_last_stop');
  } catch (e) {
    context.log.error(e);
    await commands.screenshot.take('fail_complete');
    // We try/catch so we will catch if the input fields can't be found
    // The error is automatically logged in Browsertime and re-thrown here
  }
};

As you can see, we take a screenshot after every commands.click.byXpathAndWait command.
Now I would like to know if it's possible to compare a created screenshot with another screenshot to find out if the site is available and measures what we expect.
If not, an error message should be issued.

We want to make sure that the page is as we expect. So if it's not possible to compare the screenshot/JPEG files, we could also imagine comparing a string.

Or is there another way to find out if a page is unavailable?

Thank you very much in advance.

Disable favicon

Dear sitespeed-team,
we are using the sitespeed dashboard to monitor and analyze some of our websites.
I added some of our website's URLs to the file

../tests/FOLDERNAME/desktop/urls/startpage.txt

On every run I get the following error in /tmp/.log

Google Chrome 80.0.3987.132
Mozilla Firefox 74.0
[2020-04-01 13:06:36] INFO: Versions OS: linux 4.15.0-72-generic nodejs: v12.13.0 sitespeed.io: 12.3.1 browsertime: 8.3.1 coach: 5.0.1
[2020-04-01 13:06:36] INFO: Running tests using Chrome - 1 iteration(s)
[2020-04-01 13:06:38] INFO: Testing url URL iteration 1
[2020-04-01 13:07:37] INFO: URL 20 requests, backEndTime: 8.18s, firstPaint: 16.32s, firstVisualChange: 16.33s, DOMContentLoaded: 16.36s, Load: 24.28s, speedIndex: 16414, perceptualSpeedIndex: 16681, contentfulSpeedIndex: 16337, visualComplete85: 16.33s, lastVisualChange: 24.33s, rumSpeedIndex: 16505
[2020-04-01 13:07:38] INFO: The server responded with a 404 status code for https://URL/favicon.ico
[2020-04-01 13:07:41] INFO: HTML stored in /sitespeed.io/sitespeed-result/URL/2020-04-01-13-06-36
./sitespeed.run found. Make another run ..

I also get the INFO message when I'm using JavaScript.

Now my first question: is the absence of the favicon also included in the calculation of the time?
Second question: is it possible to disable downloading the favicon?

Thank you very much
