
ProxyCrawl node

Dependency-free module for scraping and crawling websites using the ProxyCrawl API

Installation

Install using npm

npm i proxycrawl

Require the necessary API class in your project.
You can get your free ProxyCrawl token from the ProxyCrawl website.

const { CrawlingAPI, ScraperAPI, LeadsAPI, ScreenshotsAPI } = require('proxycrawl');

Crawling API usage

Initialize the API with one of your account tokens, either the normal or the JavaScript token, then make GET or POST requests accordingly.

const api = new CrawlingAPI({ token: 'YOUR_TOKEN' });

GET requests

Pass the URL you want to scrape, plus any of the options available in the API documentation.

api.get(url, options);

Example:

api.get('https://www.facebook.com/britneyspears').then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

You can pass any options from the ProxyCrawl API.

Example:

api.get('https://www.reddit.com/r/pics/comments/5bx4bx/thanks_obama/', {
  userAgent: 'Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20121202 Firefox/30.0',
  format: 'json'
}).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

POST requests

Pass the URL you want to scrape, the data you want to send (either a JSON object or a string), plus any of the options available in the API documentation.

api.post(url, data, options);

Example:

api.post('https://producthunt.com/search', { text: 'example search' }).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

You can send the data as application/json instead of x-www-form-urlencoded by setting the postType option to json.

api.post('https://httpbin.org/post', { some_json: 'with some value' }, { postType: 'json' }).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);
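To see what the two encodings actually look like on the wire, here is a minimal sketch in plain Node.js (no ProxyCrawl calls); postType only changes which of these two body formats is sent:

```javascript
// Compare the two request body encodings for the same payload.
const data = { some_json: 'with some value' };

// Default behavior: application/x-www-form-urlencoded
const formBody = new URLSearchParams(data).toString();
console.log(formBody); // some_json=with+some+value

// With postType: 'json', the body is sent as application/json
const jsonBody = JSON.stringify(data);
console.log(jsonBody); // {"some_json":"with some value"}
```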

PUT requests

Pass the URL you want to scrape, the data you want to send (either a JSON object or a string), plus any of the options available in the API documentation.

api.put(url, data, options);

Example:

api.put('https://producthunt.com/search', { text: 'example search' }).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

Javascript requests

If you need to scrape websites built with JavaScript frameworks such as React, Angular, or Vue, just pass your JavaScript token and use the same calls. Note that only .get is available with the JavaScript token, not .post.

const api = new CrawlingAPI({ token: 'YOUR_JAVASCRIPT_TOKEN' });
api.get('https://www.nfl.com').then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

In the same way, you can pass additional JavaScript options.

api.get('https://www.freelancer.com', { pageWait: 5000 }).then(response => {
  if (response.statusCode === 200) {
    console.log(response.body);
  }
}).catch(console.error);

Original status and PC status

You can always get the original status and the ProxyCrawl status from the response. Read the ProxyCrawl documentation to learn more about these statuses.

api.get('https://www.craigslist.org').then(response => {
  console.log(response.originalStatus, response.pcStatus);
}).catch(console.error);
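One practical use of pcStatus is deciding whether a request is worth retrying. The helper below is a hypothetical sketch, not part of the proxycrawl module, and it assumes (per the ProxyCrawl documentation) that a pcStatus of 200 means the request succeeded:

```javascript
// Hypothetical retry helper (not part of the proxycrawl module).
// Assumes a pcStatus of 200 means the request succeeded.
function shouldRetry(pcStatus, attempt, maxAttempts = 3) {
  return pcStatus !== 200 && attempt < maxAttempts;
}

// Usage sketch (inside an async function):
//   let attempt = 0, response;
//   do {
//     response = await api.get('https://www.craigslist.org');
//     attempt += 1;
//   } while (shouldRetry(response.pcStatus, attempt));
```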

Scraper API usage

Initialize the Scraper API and use it in the same way as the Crawling API (see above). Use it with your normal token.

const api = new ScraperAPI({ token: 'YOUR_TOKEN' });

api.get('https://www.amazon.com/Halo-SleepSack-Swaddle-Triangle-Neutral/dp/B01LAG1TOS').then(response => {
  if (response.statusCode === 200) {
    console.log(response.json);
  }
}).catch(console.error);

Leads API usage

Initialize with your Leads API token and call the getFromDomain method.

const api = new LeadsAPI({ token: 'YOUR_TOKEN' });

api.getFromDomain('somesite.com').then(response => {
  console.log(response.leads);
}).catch(console.error);
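Once you have the leads, you will typically want to post-process them. The sketch below assumes response.leads is an array of objects with an email property; that shape is an illustration, so check the actual response against the Leads API documentation:

```javascript
// Hypothetical post-processing of a leads response. The array shape
// ({ email: ... } objects) is an assumption for illustration.
function extractEmails(leads) {
  return leads
    .map(lead => lead.email)
    .filter(email => typeof email === 'string' && email.includes('@'));
}

console.log(extractEmails([
  { email: 'info@somesite.com' },
  { email: 'sales@somesite.com' },
  { email: null }
]));
// [ 'info@somesite.com', 'sales@somesite.com' ]
```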

Screenshots API usage

Initialize with your Screenshots API token and call the get method, then do whatever you need with the binary content, for example saving it to a file.

You can pass any of the available parameters.

const fs = require('fs');
const api = new ScreenshotsAPI({ token: 'YOUR_TOKEN' });

api.get('https://www.amazon.com').then(response => {
  fs.writeFileSync('amazon.jpg', response.body, { encoding: 'binary' });
}).catch(console.error);

// Example with parameters
api.get('https://www.amazon.com', { device: 'mobile' }).then(response => {
  fs.writeFileSync('amazon-mobile.jpg', response.body, { encoding: 'binary' });
}).catch(console.error);

If you have questions or need help using the library, please open an issue or contact us.


Copyright 2021 ProxyCrawl

