googlechrome / lighthouse

Automated auditing, performance metrics, and best practices for the web.

Home Page: https://developer.chrome.com/docs/lighthouse/overview/

License: Apache License 2.0

JavaScript 94.88% Shell 0.88% HTML 0.82% CSS 1.72% Python 0.03% Handlebars 0.03% TypeScript 1.64%
audit best-practices performance-metrics pwa web performance-analysis developer-tools chrome-devtools

lighthouse's Introduction

[Lighthouse logo and status badges: GitHub Actions, coverage, build tracker, npm lighthouse package]

Lighthouse analyzes web apps and web pages, collecting modern performance metrics and insights on developer best practices.

Using Lighthouse in Chrome DevTools

Lighthouse is integrated directly into the Chrome DevTools, under the "Lighthouse" panel.

Installation: install Chrome.

Run it: open Chrome DevTools, select the Lighthouse panel, and hit "Generate report".

Lighthouse integration in Chrome DevTools.

Using the Chrome extension

The Chrome extension was available prior to Lighthouse being available in Chrome Developer Tools, and offers similar functionality.

Installation: install the extension from the Chrome Web Store.

Run it: follow the extension quick-start guide.

Using the Node CLI

The Node CLI provides the most flexibility in how Lighthouse runs can be configured and reported. Users who want more advanced usage, or want to run Lighthouse in an automated fashion, should use the Node CLI.

Note Lighthouse requires Node 18 LTS (18.x) or later.

Installation:

npm install -g lighthouse
# or use yarn:
# yarn global add lighthouse

Run it: lighthouse https://airhorner.com/

By default, Lighthouse writes the report to an HTML file. You can control the output format by passing flags.

CLI options

$ lighthouse --help

lighthouse <url> <options>

Logging:
  --verbose  Displays verbose logging  [boolean] [default: false]
  --quiet    Displays no progress, debug logs, or errors  [boolean] [default: false]

Configuration:
  --save-assets                  Save the trace contents & devtools logs to disk  [boolean] [default: false]
  --list-all-audits              Prints a list of all available audits and exits  [boolean] [default: false]
  --list-trace-categories        Prints a list of all required trace categories and exits  [boolean] [default: false]
  --additional-trace-categories  Additional categories to capture with the trace (comma-delimited).  [string]
  --config-path                  The path to the config JSON.
                                 An example config file: core/config/lr-desktop-config.js  [string]
  --preset                       Use a built-in configuration.
                                 WARNING: If the --config-path flag is provided, this preset will be ignored.  [string] [choices: "perf", "experimental", "desktop"]
  --chrome-flags                 Custom flags to pass to Chrome (space-delimited). For a full list of flags, see https://bit.ly/chrome-flags
                                 Additionally, use the CHROME_PATH environment variable to use a specific Chrome binary. Requires Chromium version 66.0 or later. If omitted, any detected Chrome Canary or Chrome stable will be used.  [string] [default: ""]
  --port                         The port to use for the debugging protocol. Use 0 for a random port  [number] [default: 0]
  --hostname                     The hostname to use for the debugging protocol.  [string] [default: "localhost"]
  --form-factor                  Determines how performance metrics are scored and if mobile-only audits are skipped. For desktop, use --preset=desktop instead.  [string] [choices: "mobile", "desktop"]
  --screenEmulation              Sets screen emulation parameters. See also --preset. Use --screenEmulation.disabled to disable. Otherwise set these 4 parameters individually: --screenEmulation.mobile --screenEmulation.width=360 --screenEmulation.height=640 --screenEmulation.deviceScaleFactor=2
  --emulatedUserAgent            Sets useragent emulation  [string]
  --max-wait-for-load            The timeout (in milliseconds) to wait before the page is considered done loading and the run should continue. WARNING: Very high values can lead to large traces and instability  [number]
  --enable-error-reporting       Enables error reporting, overriding any saved preference. --no-enable-error-reporting will do the opposite. More: https://github.com/GoogleChrome/lighthouse/blob/main/docs/error-reporting.md  [boolean]
  --gather-mode, -G              Collect artifacts from a connected browser and save to disk. (Artifacts folder path may optionally be provided). If audit-mode is not also enabled, the run will quit early.
  --audit-mode, -A               Process saved artifacts from disk. (Artifacts folder path may be provided, otherwise defaults to ./latest-run/)
  --only-audits                  Only run the specified audits  [array]
  --only-categories              Only run the specified categories. Available categories: accessibility, best-practices, performance, seo  [array]
  --skip-audits                  Run everything except these audits  [array]
  --budget-path                  The path to the budget.json file for LightWallet.  [string]
  --disable-full-page-screenshot Disables collection of the full page screenshot, which can be quite large  [boolean]

Output:
  --output       Reporter for the results, supports multiple values. choices: "json", "html", "csv"  [array] [default: ["html"]]
  --output-path  The file path to output the results. Use 'stdout' to write to stdout.
                   If using JSON output, default is stdout.
                   If using HTML or CSV output, default is a file in the working directory with a name based on the test URL and date.
                   If using multiple outputs, --output-path is appended with the standard extension for each output type. "reports/my-run" -> "reports/my-run.report.html", "reports/my-run.report.json", etc.
                   Example: --output-path=./lighthouse-results.html  [string]
  --view         Open HTML report in your browser  [boolean] [default: false]

Options:
  --version                            Show version number  [boolean]
  --help                               Show help  [boolean]
  --cli-flags-path                     The path to a JSON file that contains the desired CLI flags to apply. Flags specified at the command line will still override the file-based ones.
  --locale                             The locale/language the report should be formatted in
  --blocked-url-patterns               Block any network requests to the specified URL patterns  [array]
  --disable-storage-reset              Disable clearing the browser cache and other storage APIs before a run  [boolean]
  --throttling-method                  Controls throttling method  [string] [choices: "devtools", "provided", "simulate"]
  --throttling
  --throttling.rttMs                   Controls simulated network RTT (TCP layer)
  --throttling.throughputKbps          Controls simulated network download throughput
  --throttling.requestLatencyMs        Controls emulated network RTT (HTTP layer)
  --throttling.downloadThroughputKbps  Controls emulated network download throughput
  --throttling.uploadThroughputKbps    Controls emulated network upload throughput
  --throttling.cpuSlowdownMultiplier   Controls simulated + emulated CPU throttling
  --extra-headers                      Set extra HTTP Headers to pass with request
  --precomputed-lantern-data-path      Path to the file where lantern simulation data should be read from, overwriting the lantern observed estimates for RTT and server latency.  [string]
  --lantern-data-output-path           Path to the file where lantern simulation data should be written to, can be used in a future run with the `precomputed-lantern-data-path` flag.  [string]
  --plugins                            Run the specified plugins  [array]
  --channel  [string] [default: "cli"]
  --chrome-ignore-default-flags  [boolean] [default: false]

Examples:
  lighthouse <url> --view                                                                          Opens the HTML report in a browser after the run completes
  lighthouse <url> --config-path=./myconfig.js                                                     Runs Lighthouse with your own configuration: custom audits, report generation, etc.
  lighthouse <url> --output=json --output-path=./report.json --save-assets                         Save trace, screenshots, and named JSON report.
  lighthouse <url> --screenEmulation.disabled --throttling-method=provided --no-emulatedUserAgent  Disable device emulation and all throttling
  lighthouse <url> --chrome-flags="--window-size=412,660"                                          Launch Chrome with a specific window size
  lighthouse <url> --quiet --chrome-flags="--headless"                                             Launch Headless Chrome, turn off logging
  lighthouse <url> --extra-headers "{\"Cookie\":\"monster=blue\", \"x-men\":\"wolverine\"}"        Stringify'd JSON HTTP Header key/value pairs to send in requests
  lighthouse <url> --extra-headers=./path/to/file.json                                             Path to JSON file of HTTP Header key/value pairs to send in requests
  lighthouse <url> --only-categories=performance,seo                                               Only run the specified categories. Available categories: accessibility, best-practices, performance, seo

For more information on Lighthouse, see https://developers.google.com/web/tools/lighthouse/.
Output Examples
lighthouse
# saves `./<HOST>_<DATE>.report.html`

lighthouse --output json
# json output sent to stdout

lighthouse --output html --output-path ./report.html
# saves `./report.html`

# NOTE: specifying an output path with multiple formats ignores your specified extension for *ALL* formats
lighthouse --output json --output html --output-path ./myfile.json
# saves `./myfile.report.json` and `./myfile.report.html`

lighthouse --output json --output html
# saves `./<HOST>_<DATE>.report.json` and `./<HOST>_<DATE>.report.html`

lighthouse --output-path=~/mydir/foo.out --save-assets
# saves `~/mydir/foo.report.html`
# saves `~/mydir/foo-0.trace.json` and `~/mydir/foo-0.devtoolslog.json`

lighthouse --output-path=./report.json --output json
# saves `./report.json`
Lifecycle Examples

You can run a subset of Lighthouse's lifecycle if desired via the --gather-mode (-G) and --audit-mode (-A) CLI flags.

lighthouse http://example.com -G
# launches browser, collects artifacts, saves them to disk (in `./latest-run/`) and quits

lighthouse http://example.com -A
# skips browser interaction, loads artifacts from disk (in `./latest-run/`), runs audits on them, generates report

lighthouse http://example.com -GA
# Normal gather + audit run, but also saves collected artifacts to disk for subsequent -A runs.


# You can optionally provide a custom folder destination to -G/-A/-GA. Without a value, the default will be `$PWD/latest-run`.
lighthouse -GA=./gmailartifacts https://gmail.com

Notes on Error Reporting

The first time you run the CLI you will be prompted with a message asking you if Lighthouse can anonymously report runtime exceptions. The Lighthouse team uses this information to detect new bugs and avoid regressions. Opting out will not affect your ability to use Lighthouse in any way. Learn more.

Using the Node module

You can also use Lighthouse programmatically with the Node module.

Read Using Lighthouse programmatically for help getting started.
Read Lighthouse Configuration to learn more about the configuration options available.
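
As a quick orientation, here is a minimal programmatic sketch, assuming the chrome-launcher package alongside the lighthouse module (see the guides above for the authoritative examples):

import * as chromeLauncher from 'chrome-launcher';
import lighthouse from 'lighthouse';

// Launch a Chrome instance for Lighthouse to drive (headless here, as in CI).
const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});

// Run Lighthouse against the target URL via the launched Chrome's debugging port.
const result = await lighthouse('https://example.com', {port: chrome.port, output: 'html'});

console.log(result.lhr.categories.performance.score); // scores live on the Lighthouse Result (lhr)
await chrome.kill();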

Viewing a report

Lighthouse can produce a report as JSON or HTML.

HTML report:

Lighthouse example audit

Online Viewer

Running Lighthouse with the --output=json flag generates a JSON dump of the run. You can view this report online by visiting https://googlechrome.github.io/lighthouse/viewer/ and dragging the file onto the app. You can also use the "Export" button from the top of any Lighthouse HTML report and open the report in the Lighthouse Viewer.

In the Viewer, reports can be shared by clicking the share icon in the top right corner and signing in to GitHub.

Note: shared reports are stashed as a secret Gist in GitHub, under your account.

Docs & Recipes

Useful documentation, examples, and recipes to get you started.

Docs

Recipes

Videos

The session from Google I/O 2018 covers the new performance engine, upcoming Lighthouse REST API, and using the Chrome UX report to evaluate real-user data.

Watch the Lighthouse @ Google I/O 2018 session.

The session from Google I/O 2017 covers architecture, writing custom audits, GitHub/Travis/CI integration, headless Chrome, and more:

Watch the Lighthouse @ Google I/O 2017 session.

Click the image to watch the video on YouTube.

Develop

Read on for the basics of hacking on Lighthouse. Also, see Contributing for detailed information.

Setup

# yarn should be installed first

git clone https://github.com/GoogleChrome/lighthouse

cd lighthouse
yarn
yarn build-all

Run

node cli http://example.com
# append --chrome-flags="--no-sandbox --headless --disable-gpu" if you run into problems connecting to Chrome

Getting started tip: node --inspect-brk cli http://example.com to open up Chrome DevTools and step through the entire app. See Debugging Node.js with Chrome DevTools for more info.

Tests

# lint and test all files
yarn test

# run all unit tests
yarn unit

# run a given unit test (e.g. core/test/audits/byte-efficiency/uses-long-cache-ttl-test.js)
yarn mocha uses-long-cache-ttl

# watch for file changes and run tests
#   Requires http://entrproject.org : brew install entr
yarn watch

## run linting, unit, and smoke tests separately
yarn lint
yarn unit
yarn smoke

## run tsc compiler
yarn type-check

Docs

Some of our docs have tests that run only in CI by default. To modify our documentation, you'll need to run yarn build-pack && yarn test-docs locally to make sure they pass.

Additional Dependencies

  • brew install jq

Lighthouse Integrations in Web Perf services

This section details services that have integrated Lighthouse data. If you're working on a cool project integrating Lighthouse and would like to be featured here, file an issue to this repo or tweet at us @_____lighthouse!

  • Web Page Test — An open source tool for measuring and analyzing the performance of web pages on real devices. Users can choose to produce a Lighthouse report alongside the analysis of WebPageTest results.

  • HTTPArchive - HTTPArchive tracks how the web is built by crawling 500k pages with Web Page Test, including Lighthouse results, and stores the information in BigQuery where it is publicly available.

  • Calibre - Calibre is a comprehensive performance monitoring platform running on Lighthouse. See the performance impact of your work before it hits production with GitHub Pull Request Reviews. Track the impact of Third Party scripts. Automate your performance system with a developer-first Node.js API. Try Calibre with a free 15-day trial.

  • DebugBear - DebugBear is a website monitoring tool based on Lighthouse. See how your scores and metrics changed over time, with a focus on understanding what caused each change. DebugBear is a paid product with a free 30-day trial.

  • Treo - Treo is Lighthouse as a Service. It provides regression testing, geographical regions, custom networks, and integrations with GitHub & Slack. Treo is a paid product with plans for solo-developers and teams.

  • PageVitals - PageVitals combines Lighthouse, CrUX and field testing to monitor the performance of websites. See how your website performs over time and get alerted if it gets too slow. Drill down and find the real cause of any performance issue. PageVitals is a paid product with a free 14-day trial.

  • Alertdesk - Alertdesk is based on Lighthouse and helps you to keep track of your site’s quality & performance. Run daily quality & performance tests from both Mobile and Desktop and dive into the powerful & intuitive reports. You can also monitor your uptime (every minute - 24/7) & domain health. Alertdesk is a paid product with a free 14-day trial.

  • Screpy - Screpy is a web analysis tool that can analyze all pages of your websites in one dashboard and monitor them with your team. It's powered by Lighthouse and it also includes some different analysis tools (SERP, W3C, Uptime, etc). Screpy has free and paid plans.

  • Siteimprove Performance — Siteimprove Performance is a web performance monitoring solution that enables a marketer, manager, or decision maker to understand and optimize website load times. Get easy-to-use insights with a focus on quick and impactful wins. Siteimprove Performance is a paid product with a free 14-day trial.

  • SpeedCurve — SpeedCurve is a tool for continuously monitoring web performance across different browsers, devices, and regions. It can aggregate any metric including Lighthouse scores across multiple pages and sites, and allows you to set performance budgets with Slack or email alerts. SpeedCurve is a paid product with a free 30-day trial.

  • Foo - Lighthouse-as-a-service offering free and premium plans. Provides monitoring and historical reporting of Lighthouse audits with CircleCI, GitHub, and other integrations. Features include Slack notifications, PR comment reporting and more.

  • Apdex - Apdex is a website performance service. The main features are historical Lighthouse report visualizations, mobile/desktop options, alerts, uptime monitoring, and more. There are flexible paid plans and a 30-day free trial.

  • Websu - Websu is an open source project to provide Lighthouse-as-a-Service through a simple HTTP REST API. The main features are ability to host and deploy in your own environment and historical Lighthouse report summaries.

  • DTEKT.IO - DTEKT is a website performance and uptime monitoring service. It uses lighthouse to provide visibility into the performance of websites from multiple locations on multiple devices. It offers three months free trial and paid plans.

  • SpeedVitals - SpeedVitals is a Lighthouse powered tool to measure web performance across multiple devices and locations. It has various features like Layout Shift Visualization, Waterfall Chart, Field Data and Resource Graphs. SpeedVitals offers both free and paid plans.

  • Lighthouse Metrics - Lighthouse Metrics gives you global performance insights with a single test. You can also monitor your websites on a daily or hourly base. Lighthouse Metrics offers free global one-time tests and performance monitoring as a paid feature with a free 14-day trial.

  • Auditzy - Auditzy™ is a robust website auditing & monitoring tool which lets you analyze your web page(s) pre-user journey. Analyze the Competitor Health Metric, Core Web Vitals, and Technology. Compare your web pages with your competitors to understand where you are leading or lagging. Real-time notification with Slack. Have Seamless Collaboration with Multiple Teams. Automate your Audits hourly, daily, weekly, and so on. It has a free trial with pay as you go plans.

  • Lighthouse Metrics China - The first Lighthouse metrics tool specifically designed for China. Experience unparalleled website monitoring capabilities with Lighthouse. Gain insights into the fluctuations of your scores and metrics within the realm of the Great Firewall of China, enabling a comprehensive understanding of the factors influencing each change. Lighthouse Metrics China offers both free and paid plans.

Lighthouse Integrations in non-Web Perf services

  • PageWatch — PageWatch is a tool to find problem pages on your website. It provides insights into spelling errors, layout issues, slow pages (powered by Lighthouse) and more. PageWatch is offered via free and paid plans.

  • Fluxguard - Fluxguard provides website DOM change monitoring orchestrated with Google Puppeteer, and audited by Lighthouse. Fluxguard is a freemium product, with monthly monitoring of up to 75 pages for free.

  • Microlink — Microlink is a cloud browser as API. It offers Lighthouse reports on demand, making it easy to build any service on top. Similar functionality is available via the underlying open-source project named browserless.

  • Wattspeed — Wattspeed is a free tool that generates snapshots - historical captures of your web pages that include Lighthouse scores, a list of technologies, W3C HTML validator results, DOM size, mixed content info, and more.

Plugins

Related projects

Other awesome open source projects that use Lighthouse.

  • auto-lighthouse - a CLI for crawling a domain and generating mobile and desktop reports for each page.
  • Exthouse - Analyze the impact of a browser extension on web performance.
  • Gimbal - An open source (MIT licensed) tool used to measure, analyze, and budget aspects of a web application. Gimbal also integrates reports with GitHub pull requests.
  • Gradle Lighthouse Plugin - An open source Gradle plugin that runs Lighthouse tests on multiple URLs and asserts category score thresholds (useful in continuous integration).
  • lighthouse-badges - Generate gh-badges (shields.io) based on Lighthouse performance.
  • lighthouse-batch - Run Lighthouse over a number of sites and generate a summary of their metrics/scores.
  • lighthouse-batch-parallel - Run multiple Lighthouse runs in parallel to accelerate the data collecting process, get the result stream (csv, json, js object) in your own process (warning: performance results may be volatile).
  • lighthouse-check-action - A GitHub Action to run Lighthouse in a workflow, featuring Slack notifications and report upload to S3.
  • lighthouse-check-orb - A CircleCI Orb to run Lighthouse in a workflow, featuring Slack notifications and report upload to S3.
  • andreasonny83/lighthouse-ci - Run Lighthouse and assert scores satisfy your custom thresholds.
  • GoogleChrome/lighthouse-ci - (official) Automate running Lighthouse for every commit, viewing the changes, and preventing regressions.
  • lighthouse-ci-action - A GitHub Action that makes it easy to run Lighthouse in CI and keep your pages small using performance budgets.
  • lighthouse-gh-reporter - Run Lighthouse in CI and report back in a comment on your pull requests
  • lighthouse-jest-example - Gather performance metrics via Lighthouse and assert results with Jest; uses Puppeteer to start Chrome with network emulation settings defined by WebPageTest.
  • lighthouse-lambda - Run Lighthouse on AWS Lambda with prebuilt stable desktop Headless Chrome.
  • lighthouse-mocha-example - Run Lighthouse performance tests with Mocha and chrome-launcher.
  • lighthouse-monitor - Run Lighthouse against all your URLs. Send metrics to any backend you want, save all reports with automatic data retention, and compare any two results in a web UI.
  • lighthouse-persist - Run Lighthouse and upload HTML reports to an AWS S3 bucket.
  • lighthouse-viewer - Render the Lighthouse JSON into a report, using the Lighthouse Report Renderer repackaged as UMD and ESM. Also available with React, Svelte and Vue wrappers.
  • lighthouse4u - LH4U provides Google Lighthouse as a service, surfaced by both a friendly UI+API, and backed by Elastic Search for easy querying and visualization.
  • react-lighthouse-viewer - Render a Lighthouse JSON report in a React Component.
  • site-audit-seo - CLI tool for SEO site audits: crawls the site and runs Lighthouse on each page. Output to console and tables in csv, xlsx, json, web or Google Drive.
  • webpack-lighthouse-plugin - Run Lighthouse from a Webpack build.
  • cypress-audit - Run Lighthouse and Pa11y audits directly in your E2E test suites.
  • laravel-lighthouse - Google Lighthouse wrapper for the Laravel framework that runs the Google Lighthouse CLI with custom options and can automatically save results in your server directory.

FAQ

How does Lighthouse work?

See Lighthouse Architecture.

Why is the performance score so low? It looks fine to me.

Lighthouse reports the performance metrics as they would be experienced by a typical mobile user on a 4G connection and a mid-tier ~$200 phone. Even if it loads quickly on your device and network, users in other environments will experience the site very differently.

Read more in our guide to throttling.

Why does the performance score change so much?

Lighthouse performance scores will change due to inherent variability in web and network technologies, even if there hasn't been a code change. Test in consistent environments, run Lighthouse multiple times, and beware of variability before drawing conclusions about a performance-impacting change.

Read more in our guide to reducing variability.

Can I configure the lighthouse run?

Yes! Details in Lighthouse configuration.

How does Lighthouse use network throttling, and how can I make it better?

Good question. Network and CPU throttling are applied by default in a Lighthouse run. The network attempts to emulate slow 4G connectivity and the CPU is slowed down 4x from your machine's default speed. If you prefer to run Lighthouse without throttling, you'll have to use the CLI and disable it with the --throttling.* flags mentioned above.
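
If you drive Lighthouse from the Node module instead of the CLI, the same switches are available as flags on the options object; a small sketch, assuming the flag names mirror the CLI options listed earlier:

// Sketch only: flag names assumed to mirror the CLI flags above.
const flags = {
  port: chrome.port,                 // debugging port of an already-launched Chrome
  throttlingMethod: 'provided',      // use the connection as-is rather than simulating
  screenEmulation: {disabled: true}, // also skip device emulation
};
const result = await lighthouse('https://example.com', flags);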

Read more in our guide to network throttling.

Are results sent to a remote server?

Nope. Lighthouse runs locally, auditing a page using a local version of the Chrome browser installed on the machine. Report results are never processed or beaconed to a remote server.

How do I get localized Lighthouse results via the CLI?

Starting in Lighthouse 8.0, Lighthouse relies entirely on native Intl support and no longer uses an Intl polyfill. If you're using Node 14 or later, there should be no issue because Node is now built with full-icu by default.

However, if you're using a small-icu Node build, you may see Lighthouse log messages about your locale not being available. To remedy this, you can manually install ICU data by using the full-icu module and the --icu-data-dir node flag at launch.

How do I author custom audits to extend Lighthouse?

Tip: see Lighthouse Architecture for more information on terminology and architecture.

Lighthouse can be extended to run custom audits and gatherers that you author. This is great if you're already tracking performance metrics in your site and want to surface those metrics within a Lighthouse report.

If you're interested in running your own custom audits, check out our Custom Audit Example over in recipes.
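
To give a flavor of what the recipe covers, here is a heavily trimmed sketch of a custom audit class; the artifact name and scoring threshold are illustrative, not the recipe's actual code:

import {Audit} from 'lighthouse';

class MyCustomAudit extends Audit {
  static get meta() {
    return {
      id: 'my-custom-audit',                   // illustrative id
      title: 'Site metric is within budget',
      failureTitle: 'Site metric exceeds budget',
      description: 'Example of surfacing a site-specific metric in the report.',
      requiredArtifacts: ['MyCustomGatherer'], // hypothetical gatherer name
    };
  }

  static audit(artifacts) {
    const value = artifacts.MyCustomGatherer;  // whatever the paired gatherer collected
    return {
      score: value < 2000 ? 1 : 0,             // illustrative pass/fail threshold
      numericValue: value,
    };
  }
}

export default MyCustomAudit;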

How do I contribute?

We'd love help writing audits, fixing bugs, and making the tool more useful! See Contributing to get started.


Lighthouse logo
Lighthouse, ˈlītˌhous (n): a ~~tower or other structure~~ tool containing a beacon light to warn or guide ~~ships at sea~~ developers.

lighthouse's People

Contributors

adamraine, addyosmani, adrianaixba, alexnj, beytoven, boopathi, brendankenny, connorjclark, deepanjanroy, denar90, ebidel, evenstensberg, exterkamp, jazyan, johnemau, kaycebasques, kdzwinel, khempenius, mattzeunert, midzer, mixed, patrickhulce, paulirish, paullewis, robdodson, samccone, snyk-bot, wardpeet, wwwillchen, xhmikosr


lighthouse's Issues

Gather scheduling strawman

const LIFECYCLE = {
  InitialBinding: 1,
  BeforePageload: 2,
  AfterPageLoad: 3,
  AtRest: 4,
};

class Thingthing extends Gather {

  static gather(opts) {
    return driver.waitFor(LIFECYCLE.BeforePageload)
      .then(_ => { /* gather work */ })
      .then(_ => { /* gather work */ })
      .then(_ => driver.waitFor(LIFECYCLE.AtRest))
      .then(_ => { /* gather work */ });
  }
}

Disclaimer: All the names are definitely wrong.

But that said, I'm thinking this seems a nice way of managing scheduling of all gather work, so we don't step on toes and minimize repetitive work.

All gathers MUST start with a waitFor. It's possible they have a second.

Thoughts?

metric: Time to Interactive

Time To Interactive (or TTI) is a classic success metric across web, games, and application development. We're attempting to provide a robust version of it for the web platform.

Key goals for the metric:
  • The user should feel like the page is done enough to start interacting -- they may scroll, tap on notification icons, or start reading, and expect the page not to move around any longer.
  • The page should be ready for the user. It's not enough to present the appearance of done-ness. The user shouldn't have problems once they start engaging with the page, because it's either still busy or uninitialized. In other words, the page is not ready if main thread contention is preventing user interaction from being successful.

A working definition for Time To Interactive

Updated: April 6, 2016

There are two major timestamps, composed of smaller ones:

Timestamp: Visually ready for interaction

Visually ready is the Math.max() of these timestamps:

  1. domContentLoadedEventEnd has fired
    • Definition: HTML parsing has finished, all DOMContentLoaded handlers have run
  2. User-agent loading indicator is done
    • Current definition: Top frame and all iframes have fired window load event
    • Proposed definition (from cl/1860743002): top frame only: DCL ended and all layout-blocking resources, plus images that began their request before DCL ended, have finished.
    • Alternative definition (from Chrome on Android Progress Bar Enhancements - google-only, sry): top frame's DOMContentLoaded + top frame's images (which started before DCL) are loaded (IMO, we should use this one)
      • Perhaps we only include above-the-fold-ish images.
  3. Above-the-fold layout has stabilized
  4. What about fonts?
    • They are not necessarily addressed by above items.
    • Do we wait for in-flight font requests? Only if the text remains invisible (if the user allows a fallback font, then we don't wait for in-flight requests)
    • Can we ignore icon-fonts like font-awesome?
    • We should at least collect the font-painted timestamps for all webfonts
    • The Time to First Meaningful Paint: a layout-based approach algorithm considers in-flight font requests that layout more than 200 characters.
  5. Viewport and above-the-fold
    • None of the above considers what is in the viewport, and waits for resources that may be offscreen or 10,000px away.
    • Should we only consider images that are above-the-fold?

Timestamp: Main thread ready for interaction

Once visually ready has hit, we begin considering thread availability.

  1. Main thread is available
    • Definition TBD.
    • Approximate: 90th percentile main thread task duration ~ under 300ms. Maximum main thread task duration is < 400ms
    • Perhaps: System Health "input readiness" Metric by @benshayden
    • Perhaps: Ask requestIdleCallback for deadlines
      • "You call rIC until you get at least 20ms. Treat it as a proxy for page being settled. Seemed to correlate pretty well" ~@paullewis & jochen

@skyostil added: "it's super hard to congest the compositor, but a fair indicator will be availability of time on main thread because a) a lot of work will be main-thread bound and b) CPU contention (he thinks) can sometimes block the compositor from allowing scroll."

@paullewis added: "some of this may be in our wording where we explain that we're treating main thread as a proxy for interactivity"

Main-thread dependent input events
  • Question: Do we treat pages with slow-scrolling-regions (global touchstart, etc) the same as ones without?
  • Verdict: yes, until there is a CSP-like policy that makes it impossible to add these handlers

Key Data, Goals, and Scoring

  • milliseconds: navigationStart ─⇥ visually ready
  • milliseconds: navigationStart ─⇥ thread ready
  • milliseconds: visually ready ─⇥ thread ready
    • The larger the delta between "looks done" and "is ready", the more you're tanking the UX, and the score should represent that.

  • Goal: `max(visReady, threadReady)` should be < 5000ms
  • Scoring: TBD
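
A minimal sketch of how these data points combine, using hypothetical variable names (all values in milliseconds relative to navigationStart):

// Hypothetical variable names; each timestamp is described in the sections above.
const visuallyReady = Math.max(dclEnd, loadingIndicatorDone, aboveTheFoldLayoutStable);
const timeToInteractive = Math.max(visuallyReady, mainThreadReady);

const looksDoneVsIsReadyDelta = mainThreadReady - visuallyReady; // a large delta tanks the UX
const meetsGoal = timeToInteractive < 5000;                      // goal from above; scoring curve TBD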

Promisify the chrome-remote-interface API

I think it'd make most sense to do this on our side. That'd also allow us to handle the err in a generalized fashion.

@brendankenny and I were discussing how this would help the ergonomics of development some so we have a unified async structure.

I'm unsure on priority of improving this vs getting a few more tests written. Thoughts?
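
A rough sketch of what the wrapper could look like on our side, assuming the callback-style send(method, params, callback) shape of chrome-remote-interface:

// Assumed callback signature: callback(err, result).
function sendCommand(connection, method, params = {}) {
  return new Promise((resolve, reject) => {
    connection.send(method, params, (err, result) => {
      // Generalized error handling lives in one place, as discussed above.
      if (err) return reject(err);
      resolve(result);
    });
  });
}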

audit: Once installed, launches with a splash screen

Manual evaluation: manifest has name, background_color, theme_color, icons at 192px and up

input:
  • parsed manifest - #22
audit:
  • based on docs & comments from Mounir, the following are required:
    1. has manifest
    2. manifest is valid (enough)
    3. name
    4. background_color
    5. theme_color
    6. and icons at 192px and up
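
For illustration, a sketch of the check against a parsed manifest (the manifest shape here is an assumption, not the parser's actual output):

// Sketch only: assumes a plain parsed-manifest object.
function passesSplashScreenAudit(manifest) {
  if (!manifest) return false;                 // 1 & 2: has a manifest that parsed
  const hasLargeIcon = (manifest.icons || []).some(icon =>
    (icon.sizes || '').split(' ').some(size => parseInt(size, 10) >= 192));
  return Boolean(manifest.name) &&             // 3
         Boolean(manifest.background_color) && // 4
         Boolean(manifest.theme_color) &&      // 5
         hasLargeIcon;                         // 6: icons at 192px and up
}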

a brief interlude on.... icon sizes

.. please see #291 for all the details about icon sizes and whats required for splash, etc.

Docs: Splash screen on /web

See also "PWA validator" bookmarklet - #17

Manifests: highlight if short_name exceeds expected character count

Afaik on the Android homescreen, if a short_name contains too many characters it will be trimmed to something like My Totally Awesome App -> My Totally Awe.... It may be useful to check the spec or with Mounir regarding expected lengths for these app names and whether flagging this via Lighthouse makes sense.

I've personally run into issues with needing to manually test this on-device (or rethink the names used) to avoid truncating, so throwing the idea out there.
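
A tiny sketch of what the check could look like; the threshold here is a placeholder, not a spec value, and would need confirming as noted above:

// Sketch only: SHORT_NAME_SOFT_LIMIT is an assumption, not from the spec.
const SHORT_NAME_SOFT_LIMIT = 12;

function shortNameLikelyTruncated(manifest) {
  return Boolean(manifest && manifest.short_name) &&
         manifest.short_name.length > SHORT_NAME_SOFT_LIMIT;
}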

Cannot read property '_destroySSL' of undefined at destroySSL

Just tried running a lighthouse trace and hit the following issues:

  • launch-chrome.sh is OS X-only. Would it be worth moving this to an npm script and using which to pick the version of Chrome to use?

  • Node 5.9.1 is required. We could add a check for >= 5.9 and throw a warning otherwise

  • On linux I get the following error:

    TypeError: Cannot read property '_destroySSL' of undefined
    at destroySSL (_tls_wrap.js:370:7)

Going to dig into that one and try and figure out the problem but wanted to raise an issue on it..

Graceful recovery when no manifest is present


I suppose there's a few cases that we shouldn't bomb on.

  • link rel manifest not found in DOM
  • link rel manifest found, but it 404s
  • manifest asset retrieved but it's invalid JSON
  • manifest members are missing, and all that.
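
A sketch of handling those cases without bombing, returning a value-plus-debug-string result (inputs and shape are illustrative, not the gatherer's real API):

// Illustrative only.
function recoverManifest(linkHref, responseStatus, rawBody) {
  if (!linkHref) return {value: null, debugString: 'No <link rel="manifest"> found in the DOM'};
  if (responseStatus === 404) return {value: null, debugString: 'Manifest link found, but the request 404ed'};
  try {
    return {value: JSON.parse(rawBody), debugString: null};
  } catch (err) {
    return {value: null, debugString: 'Manifest asset retrieved but it is invalid JSON'};
  }
}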

I think we have decent coverage here, but we may need to bulk up tests for the manifest gatherer.

@brendankenny wanna take a stab here?

Define trace parser preference

We can either use the DevToolsTimelineModel (which we currently do in FMP), or we can require in traceviewer.

My preference is to try and use the latter wherever we can, but fall back to the former where it offers other insights into the data.

Thoughts?

Support for User-Agent based rendered apps

Problem with apps that send different responses based on User-Agent.

  • Looks like DevTools and lighthouse(extension) debugger cannot be open at the same time - https://bugs.chromium.org/p/chromium/issues/detail?id=129539 . So device mode emulation with lighthouse extension seems not possible
  • lighthouse uses sendCommand to emulate device mode when opening a URL. So you can use the lighthouse extension after opening the URL from the lighthouse CLI. But the app/manifest.json isn't requested with device mode on, as it is requested in the scope of the extension without device mode. So the user-agent check kicks in and serves no manifest for desktop, and this fails all the checks that depend on the manifest.
  • And opening chrome with device mode "on" is also not an option - https://bugs.chromium.org/p/chromium/issues/detail?id=373169&thanks=373169&ts=1400050662

Any ideas ?

audit: Network connection is secure

Requirement: All assets served on HTTPS
Manual evaluation: Look at DevTools security panel to verify no problems, all green.

input:

audit:

  • to see if main origin is considered secure

when:

  • page loaded and in idle state
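
An illustrative version of the core check, assuming a gathered list of network records that each carry a url:

// Sketch only: the record shape is assumed.
function allAssetsServedSecurely(networkRecords) {
  return networkRecords.every(record => {
    const protocol = new URL(record.url).protocol;
    return protocol === 'https:' || protocol === 'data:'; // data: URIs never hit the network
  });
}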

metric: Thread availability

Capturing how available the main thread is. This is a proxy metric for potential input latency, as we cannot predict when input will hit the browser.


Update: See details at Timestamp: Main thread ready for interaction in #27

Scoring

S-curve handled by Ben's math. Basic expectations:

  • Goal: 90th percentile main thread task duration ~ under 300ms
  • Goal: Maximum main thread task duration is < 400ms
  • 100/100 if the 90th percentile figure is < 12ms
  • 0/100 if the 90th percentile figure is 1000ms or more
  • TBD: Determine curve.
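
A sketch of computing those figures from a list of main-thread task durations (input shape assumed; the curve itself is still TBD):

// taskDurationsMs: array of main-thread task durations in milliseconds (shape assumed).
function percentile(taskDurationsMs, p) {
  const sorted = [...taskDurationsMs].sort((a, b) => a - b);
  return sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
}

const p90 = percentile(taskDurationsMs, 0.9);
const longest = Math.max(...taskDurationsMs);
const meetsGoals = p90 < 300 && longest < 400; // goals listed above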

Handle page load in extension

I figure we can add a control that causes a full navigation of the page so we can capture page load traces and network events. This makes the extension less 'automatic', but the alternative would be to do the capture on every page load, which would be bad.

@paulirish wdyt? Am I missing something?

add "Beta" banner to extension UI

Komoroske brought this up during review.

To manage expectations, we need to point people to this repo and indicate things are still in development

Problem: Render-blocking JavaScript

Non-async script resources that are layout-blocking.

Calculating the total time cost:
  • Grab all blocking JS requests (script net requests with priority of {Medium, High, VeryHigh})
  • Duration of each script == from fetchStart through responseEnd until ExecuteScript finishes.
  • Last critical script == last script to ExecuteScript before first meaningful paint. Discard any following.
  • Total duration == Activity time span from first script's start to last script's finish.
What we report:
  • What are the resources
  • Their individual times of (network + main thread)
  • Total activity time span.
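
A sketch of the total-cost calculation described above (the record shape and field names are assumptions):

// Sketch only: each record is assumed to carry fetchStart / executeScriptEnd timestamps and a priority.
function renderBlockingScriptCost(scriptRecords, firstMeaningfulPaint) {
  const blocking = scriptRecords.filter(r =>
    ['Medium', 'High', 'VeryHigh'].includes(r.priority) &&
    r.executeScriptEnd <= firstMeaningfulPaint);          // discard scripts after first meaningful paint
  if (blocking.length === 0) return {resources: [], totalSpan: 0};

  const start = Math.min(...blocking.map(r => r.fetchStart));
  const end = Math.max(...blocking.map(r => r.executeScriptEnd));
  return {
    resources: blocking.map(r => ({url: r.url, duration: r.executeScriptEnd - r.fetchStart})),
    totalSpan: end - start,                               // activity span from first start to last finish
  };
}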

Remove classes from Audits?

It feels like the audits are a bit unwieldy as classes. I was curious what they'd look like a bit more nude..

Here's a before and after:

[Screenshot: before-and-after comparison of an audit without classes]

(I've removed annotations from both, as they're particularly nasty for audits.)

@paullewis I spoke to you briefly about this and I believe you weren't sold on it. I'm in no rush, but curious what you and others think.

Add Sec audit for using CSP

Detect whether a site is using CSP rules. If no unsafe rules exist then say the site is using good CSP values. If an unsafe rule does exist, then warn that there could be possible issues with the policy being enforced.

Sound like a good audit to add?
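
Something along these lines, perhaps (header parsing is simplified and the unsafe-rule detection only covers the obvious keywords):

// Sketch only: takes the raw Content-Security-Policy header value.
function auditCsp(cspHeaderValue) {
  if (!cspHeaderValue) {
    return {pass: false, message: 'No Content-Security-Policy header found'};
  }
  const hasUnsafeRule = /'unsafe-inline'|'unsafe-eval'/.test(cspHeaderValue);
  return hasUnsafeRule
    ? {pass: false, message: 'CSP present, but unsafe rules could weaken the policy being enforced'}
    : {pass: true, message: 'CSP present with no unsafe rules detected'};
}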

Saving lighthouse run artifacts to disk

Over in my fMP pr I noted it'd be valuable to save the trace to disk.. mostly so that I can verify the metrics and summarized data is correct.

@brendankenny rightly pointed out that in addition to debugging, there's value for saving the artifacts locally for timeseries analysis (a la big-rig).

Personally, I'd be happy to save all my runs artifacts until i want to free up the disk space.

Any ideas on what setup would make sense? cmd-line option -> something?

audit: App can load on offline/flaky connection (phase 2)

Currently we: 1) get all SW registrations, 2) filter to see whether any are in an "activated" state. This is good for right now, but it doesn't yet verify that offline caching is provided via a Service Worker and that the page successfully loads while offline. That'll be the next phase of this test.

Manual evaluation: Flip page offline via DevTools network throttling. Try it out

input:
  • all matched SW registrations
audit:

Something like this…

  1. flip network throttling to offline.
  2. maybe clear disk cache
  3. reload the page (or more cleanly: load about:blank and then navigate to the url)
  4. network domain can communicate if the page's network request was served via SW.
  5. verify it's a 200, and the page is "good"
when:
  • page loaded and in idle state
  • everything else is done?
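
A rough sketch of that flow over the DevTools protocol (Network.emulateNetworkConditions and the fromServiceWorker response field are protocol features; the driver helpers are assumptions):

// Sketch only: driver.sendCommand / driver.gotoURL are assumed helpers around the protocol.
async function auditOfflineLoad(driver, url) {
  // 1. flip network throttling to offline
  await driver.sendCommand('Network.emulateNetworkConditions',
    {offline: true, latency: 0, downloadThroughput: 0, uploadThroughput: 0});

  // 3. load about:blank, then navigate to the url
  await driver.gotoURL('about:blank');
  const response = await driver.gotoURL(url);

  // 4 & 5. the Network domain can report whether the request was served via SW; verify it's a 200
  return response.fromServiceWorker && response.status === 200;
}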

Separate head and body in gatherer?

At the moment the viewport audit is running a regex over the entire HTML. It would be useful for this (and future audits) to separate the body and the head as inputs. The other alternative would be to have these as gathers, but that feels like it wouldn't be as flexible.

Thoughts on having input.html + input.head + html body?

audit: Omnibox matches brand colors

Manual evaluation: manifest has theme_color, <head> has <meta name="theme-color" content="_any valid CSS color_">

input:
  • parsed manifest - #22
audit:
  • parse manifest for theme_color.
  • Grab DOM and parse document.head for the theme-color meta.

Currently both are required. Chrome will be transitioning to use manifest theme-color by default: go/fizz-feature-backlog has a tracking item for this.
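
An illustrative version of both checks (inputs assumed: the parsed manifest and the document's head element):

// Sketch only: input shapes are assumptions.
function omniboxMatchesBrandColors(manifest, headElement) {
  const manifestThemeColor = manifest && manifest.theme_color;
  const metaTag = headElement.querySelector('meta[name="theme-color"]');
  const metaThemeColor = metaTag && metaTag.getAttribute('content');
  return Boolean(manifestThemeColor) && Boolean(metaThemeColor); // both currently required
}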

See also "PWA validator" bookmarklet - #17

Docs: Support for theme-color in Chrome 39 for Android, and whatwg/meta-theme-color: Spec for the theme-color meta extension

metric: First meaningful paint (1.0)

Ideally, the first paint where the user feels that the primary content is visible

This means paints with only the topnav don't qualify. Any primary text must be visible (and not waiting for fonts). If an image is critical to the page (e.g. e-commerce product page), then first meaningful paint requires it to be visible.

Manual evaluation: Network Tab in DevTools timeline. Reload with screenshots/filmstrip enabled. Select the first screenshot with a meaningful view.

Input:
Audit Scoring
  • Metric's value is milliseconds from navigationStart until this paint hit swapBuffers from GPU/browser threads.
  • Goal: first meaningful paint delivered from first 14K of the HTML document in 1000ms (on the reference device/network class)
  • Metric's score will be expressed as n/100.
    • 100/100 score is fMP in <= 1000ms
    • 90/100 is ~= 3000ms
    • TBD: Determine curve of score down to 0/100.

Link to explanations

Loving the plugin so far. One thing that might be beneficial is to link to explanations for things. ex: I'm missing a canonical URL on one of my sites but if I don't know what a canonical URL is there's no explanation for how I can add one :D

CLI should warn if Node version < 5.x

(orig reported at #73 )

Since we're using a bunch of es2015 features, node 4 isn't supported.

The cli should error and exit if you're not running using a 5.x runtime.

question: Should the error message recommend using nvm?
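
A minimal sketch of the guard, using the semver package (an assumption; a plain comparison on process.version would also do):

import semver from 'semver';

// process.version looks like 'v5.9.1'; semver accepts the leading 'v'.
if (semver.lt(process.version, '5.9.0')) {
  console.error(`Lighthouse requires Node 5.9 or later; you are running ${process.version}.`);
  console.error('Consider using nvm to install and switch to a newer runtime.');
  process.exit(1);
}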

Extension content script should do manifest discovery

As Dion recommended, the extension can be a passive PWA inspection tool, lighting up as active when a manifest is detected.

I investigated using the chrome.declarativeContent API for this, however it is limited to visible elements, and a <link> element is not one.

The best solution appears to be a small content script, as "run_at": "document_idle", which looks for the manifest <link> in the DOM and sets the page action icon/badge accordingly.

Does that sound about right to folks?
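
Roughly, the content script could be as small as this (the chrome.runtime messaging API is standard extension plumbing; the message shape is illustrative):

// content-script.js -- declared with "run_at": "document_idle" in the extension manifest.
const manifestLink = document.querySelector('link[rel="manifest"]');

// Tell the background page whether to light up the page action (message shape is illustrative).
chrome.runtime.sendMessage({
  type: 'manifest-detected',
  found: Boolean(manifestLink),
  href: manifestLink ? manifestLink.href : null,
});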

Reference Device & Network Conditions => Throttling & Emulation

Reference Device & Network Conditions

Network Latency

  • 150ms RTT is a reasonable approximation for "typical mobile"
  • First 14K can be delivered with TCP/TLS handshakes in 4 round trips.
  • 150ms is ~= 57th percentile mobile latency
    • Based on industry research data
    • 57ms at 10th percentile. 135ms at 50th. 853ms at 90th.
    • 85ms to 280ms hits the quartiles.
    • 150ms is memorable and representative.
  • ( 150ms latency * 4 RTTs ) = 600ms
  • 1000ms = 600ms transfer time + 100ms server response time + 150ms control plane + 150ms rendering
  • 500ms (~85th percentile) would be a good approximation for "typical slow mobile"

Network throughput

  • 1.6Mbps download throughput
    • 1.6 is used in WebPageTest's 3g preset. WebPageTest experimented to find its preset sweet spots and used facebook/augmented-traffic-control originally.
    • Still seeking improved numbers for typical and typical-slow mobile throughput. crbug/590880

Device Class


Implementation

  • emulateNetworkConditions should be set for the page and SW
  • setCPUThrottlingRate can help us approximate.
    • Until calibration is implemented on the devtools backend, we may need to do a calibration run ourselves.
    • Let's doublecheck with @a1ph about Windows, as original commit says POSIX support only.
    • Workers may be blacklisted; let's also check about throttling the service worker
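
Putting the numbers above into protocol calls might look like this (the command names and parameters are from the DevTools protocol; the driver helper and the upload figure are assumptions):

// Sketch only: driver.sendCommand is an assumed helper.
await driver.sendCommand('Network.emulateNetworkConditions', {
  offline: false,
  latency: 150,                              // 150ms RTT, per "typical mobile" above
  downloadThroughput: 1.6 * 1024 * 1024 / 8, // 1.6Mbps expressed in bytes per second
  uploadThroughput: 750 * 1024 / 8,          // assumption -- no upload figure is given above
});

await driver.sendCommand('Emulation.setCPUThrottlingRate', {rate: 4}); // 4x slowdown, per the FAQ earlier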

audit: no plugins

Should we have an audit to discourage use of plugins? E.g. we already list use of Flash as a common mistake in our "mobile friendly documentation"

Some types of videos or content are not playable on mobile devices, such as license-constrained media or experiences that require Flash or other players that are not broadly supported on mobile devices. Unplayable content, when featured on a page of any website can be very frustrating for users.

A "simple" way to ensure that this is the case:

  • check that the site's CSP policy sets object-src 'none'
  • check that iframes have a sandbox attribute
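
An illustrative version of those two checks (inputs assumed: the CSP header value and the document):

// Sketch only: input shapes are assumptions.
function noPluginsAudit(cspHeaderValue, doc) {
  const objectSrcNone = /object-src\s+'none'/.test(cspHeaderValue || '');
  const allIframesSandboxed = Array.from(doc.querySelectorAll('iframe'))
    .every(frame => frame.hasAttribute('sandbox'));
  return objectSrcNone && allIframesSandboxed;
}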
