
Web Crawler

This is a simple yet robust, modular web crawler designed to traverse and fetch web pages efficiently. It is built with a focus on extensibility, configurability, and sound coding practices.

Features

  • Modular Design: Clear separation of concerns with distinct modules for fetching, link extraction, retry logic, and queue management.
  • Concurrency Control: Efficiently manages multiple concurrent requests without overwhelming target servers.
  • URL Deduplication: Ensures each URL is processed only once, avoiding redundant fetches (see the sketch following this list).
  • Retry Mechanism: Handles transient failures by re-enqueuing URLs with a delay.
  • Configurable: Easily swap out core components to suit different needs, thanks to the use of interfaces and the Strategy Pattern.
  • Robustness: Designed to handle edge cases and the unpredictable nature of the web.
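
The sketch below shows one way deduplication and bounded concurrency can combine: a mutex-guarded visited set plus a buffered channel used as a semaphore. The names here are illustrative assumptions, not the crawler's actual internals.

package main

import "sync"

// visitedSet tracks processed URLs so each one is handled at most once.
// Illustrative sketch only; these are not the crawler's actual types.
type visitedSet struct {
    mu   sync.Mutex
    seen map[string]bool
}

// markNew reports whether url is being seen for the first time.
func (v *visitedSet) markNew(url string) bool {
    v.mu.Lock()
    defer v.mu.Unlock()
    if v.seen[url] {
        return false
    }
    v.seen[url] = true
    return true
}

func main() {
    visited := &visitedSet{seen: make(map[string]bool)}
    queue := []string{"https://example.com", "https://example.com"} // duplicate on purpose
    sem := make(chan struct{}, 10) // semaphore: at most 10 in-flight fetches
    var wg sync.WaitGroup
    for _, url := range queue {
        if !visited.markNew(url) {
            continue // already seen; skip the duplicate
        }
        sem <- struct{}{} // acquire a slot before starting a fetch
        wg.Add(1)
        go func(u string) {
            defer wg.Done()
            defer func() { <-sem }() // release the slot when done
            _ = u                    // a real crawler would fetch u here
        }(url)
    }
    wg.Wait()
}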

Design Principles and Patterns

  • Single Responsibility Principle: Each module has a distinct responsibility, making the code maintainable and testable.
  • Strategy Pattern: Use of interfaces allows for runtime selection of algorithms, making the crawler highly configurable (sketched below).
  • Observer Pattern: The crawler observes all URL fetches and delegates retry logic, ensuring resilience.
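
As a minimal sketch of the Strategy Pattern in this setting, the caller can depend only on a small interface while the concrete behavior is chosen at runtime. The Fetcher interface and implementations below are illustrative assumptions, not the project's actual types.

package main

import "fmt"

// Fetcher is the strategy interface: anything that can fetch a URL.
type Fetcher interface {
    Fetch(url string) (string, error)
}

// plainFetcher is one concrete strategy.
type plainFetcher struct{}

func (plainFetcher) Fetch(url string) (string, error) {
    return "<html>...</html>", nil // a real implementation would do an HTTP GET
}

// loggingFetcher wraps any other Fetcher, adding logging.
type loggingFetcher struct{ inner Fetcher }

func (l loggingFetcher) Fetch(url string) (string, error) {
    fmt.Println("fetching", url)
    return l.inner.Fetch(url)
}

func main() {
    // The concrete strategy is chosen at runtime; the caller depends
    // only on the Fetcher interface.
    var f Fetcher = loggingFetcher{inner: plainFetcher{}}
    body, _ := f.Fetch("https://example.com")
    fmt.Println(len(body), "bytes")
}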

Modules

  1. Fetcher: Handles HTTP requests.
  2. Link Extractor: Parses fetched HTML content to extract links.
  3. Retry Manager: Implements retry logic for failed requests.
  4. Queue: Manages the list of URLs to be processed, ensuring concurrency control and deduplication.
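
Taken together, the modules could be expressed as small interfaces along these lines. This is a sketch: the names FetcherInterface, LinkExtractorInterface, and RetryInterface appear elsewhere in this README, but the method signatures are assumptions.

package crawler

import "time"

// Fetcher: handles HTTP requests.
type FetcherInterface interface {
    Fetch(url string) (body []byte, err error)
}

// Link Extractor: parses fetched HTML content to extract links.
type LinkExtractorInterface interface {
    ExtractLinks(baseURL string, body []byte) []string
}

// Retry Manager: decides whether a failed URL should be re-enqueued,
// and after what delay.
type RetryInterface interface {
    ShouldRetry(url string, attempt int) (retry bool, delay time.Duration)
}

// Queue: manages the list of URLs to be processed.
type Queue interface {
    Enqueue(url string)
    Dequeue() (url string, ok bool)
}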

Usage

crawler := NewCrawler(
    startURL,
    fetcherInstance,
    linkExtractorInstance,
    retryManagerInstance,
)
crawler.Crawl()

Replace startURL, fetcherInstance, linkExtractorInstance, and retryManagerInstance with appropriate values.
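
For a fuller picture, a complete entry point might look like the sketch below. HTTPFetcher, HTMLLinkExtractor, and SimpleRetryManager are hypothetical names used only for illustration; substitute whatever implementations you actually use.

package main

// Sketch of a complete entry point. HTTPFetcher, HTMLLinkExtractor,
// and SimpleRetryManager are hypothetical implementations.
func main() {
    startURL := "https://example.com"

    crawler := NewCrawler(
        startURL,
        &HTTPFetcher{},        // implements FetcherInterface
        &HTMLLinkExtractor{},  // implements LinkExtractorInterface
        &SimpleRetryManager{}, // implements RetryInterface
    )
    crawler.Crawl()
}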

Extending the Crawler

Because of its modular design, extending the crawler is straightforward:

  • New Fetch Mechanisms: Implement the FetcherInterface to add new fetch mechanisms (an example follows this list).
  • Link Extraction Techniques: Implement the LinkExtractorInterface for different link extraction strategies.
  • Custom Retry Logic: Implement the RetryInterface to define custom retry behaviors.
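
For example, a custom fetch mechanism that sets an explicit timeout and User-Agent could be plugged in by satisfying FetcherInterface. This is a sketch; the Fetch signature is an assumption, so adjust it to match the real interface.

package crawler

import (
    "io"
    "net/http"
    "time"
)

// TimeoutFetcher is a sketch of a custom fetch mechanism that sets an
// explicit timeout and User-Agent. The Fetch signature is assumed to
// satisfy FetcherInterface.
type TimeoutFetcher struct {
    client http.Client
}

func NewTimeoutFetcher(timeout time.Duration) *TimeoutFetcher {
    return &TimeoutFetcher{client: http.Client{Timeout: timeout}}
}

func (f *TimeoutFetcher) Fetch(url string) ([]byte, error) {
    req, err := http.NewRequest(http.MethodGet, url, nil)
    if err != nil {
        return nil, err
    }
    req.Header.Set("User-Agent", "go-crawler/1.0")
    resp, err := f.client.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return io.ReadAll(resp.Body)
}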

Conclusion

This web crawler is a simple yet versatile tool, designed to balance efficiency, extensibility, and robustness.
