
cloudproxy's Introduction


CloudProxy


About The Project

The purpose of CloudProxy is to hide your scraper's IP behind the cloud. It lets you spin up a pool of proxies on popular cloud providers with just an API token. No configuration needed.

CloudProxy exposes an API with the IPs and credentials of the provisioned proxies.

Providers supported:

  • DigitalOcean
  • AWS
  • Google Cloud (GCP)
  • Hetzner

Planned:

  • Azure
  • Scaleway
  • Vultr

Inspired by

This project was inspired by Scrapoxy, though that project no longer seems actively maintained.

The primary advantage of CloudProxy over Scrapoxy is that CloudProxy only requires an API token from a cloud provider. CloudProxy automatically deploys and configures the proxy on the cloud instances without the user needing to preconfigure or copy an image.

Please always scrape nicely and respectfully, and do not slam servers.

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

All you need is:

  • Docker

Installation

Environment variables:

Required

You have two available methods of proxy authentication: username and password, or IP restriction. You can use either one or both simultaneously.

  • USERNAME, PASSWORD - set the username and password for the forward proxy. The username and password should consist of alphanumeric characters. Using special characters may cause issues due to how URL encoding works.
  • ONLY_HOST_IP - set this variable to true if you want to restrict access to the proxy only to the host server (i.e., the IP address of the server running the CloudProxy Docker container).

Optional

  • AGE_LIMIT - set the age limit for your forward proxies in seconds. Once the age limit is reached, the proxy is replaced. A value of 0 disables the feature. Default: disabled.

See the individual provider pages (linked in the supported providers section above) for the environment variables each provider requires.

Docker (recommended)

For example:

docker run -e USERNAME='CHANGE_THIS_USERNAME' \
    -e PASSWORD='CHANGE_THIS_PASSWORD' \
    -e ONLY_HOST_IP=True \
    -e DIGITALOCEAN_ENABLED=True \
    -e DIGITALOCEAN_ACCESS_TOKEN='YOUR SECRET ACCESS KEY' \
    -it -p 8000:8000 laffin/cloudproxy:latest

It is recommended to use a Docker image tagged to a version, e.g. laffin/cloudproxy:0.6.0-beta; see releases for the latest version.

Usage

CloudProxy exposes an API on localhost:8000. Your application can use the API below to retrieve the IPs and credentials of the deployed proxy servers, then route its requests through those IPs.

The logic to cycle through IPs for proxying needs to live in your application, for example:

import random
import requests


# Returns a random proxy from CloudProxy
def random_proxy():
    ips = requests.get("http://localhost:8000").json()
    return random.choice(ips['ips'])


proxies = {"http": random_proxy(), "https": random_proxy()}
my_request = requests.get("https://api.ipify.org", proxies=proxies)
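
Because proxies can be recycled at any time (for example when AGE_LIMIT is set), the list you fetched may go stale between requests. A minimal sketch of retrying with a fresh proxy on connection failure, reusing random_proxy() from above (the retry count and timeout are arbitrary choices, not CloudProxy defaults):

# Retry a request with a fresh proxy if the current one is unreachable.
for attempt in range(3):
    proxy = random_proxy()
    try:
        my_request = requests.get(
            "https://api.ipify.org",
            proxies={"http": proxy, "https": proxy},
            timeout=10,
        )
        break
    except requests.exceptions.ConnectionError:
        continue  # proxy may have been recycled; pick another one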

CloudProxy UI


You can manage CloudProxy via an API and UI. You can access the UI at http://localhost:8000/ui.

Via the UI, you can scale your proxies up and down and remove them for each provider.

CloudProxy API

List available proxy servers

Request

GET /

curl -X 'GET' 'http://localhost:8000/' -H 'accept: application/json'

Response

{"ips":["http://username:password:192.168.0.1:8899", "http://username:password:192.168.0.2:8899"]}

List random proxy server

Request

GET /random

curl -X 'GET' 'http://localhost:8000/random' -H 'accept: application/json'

Response

["http://username:password:192.168.0.1:8899"]

Remove proxy server

Request

DELETE /destroy

curl -X 'DELETE' 'http://localhost:8000/destroy?ip_address=192.1.1.1' -H 'accept: application/json'

Response

["Proxy <{IP}> to be destroyed"]

Restart proxy server (AWS & GCP only)

Request

DELETE /restart

curl -X 'DELETE' 'http://localhost:8000/restart?ip_address=192.1.1.1' -H 'accept: application/json'

Response

["Proxy <{IP}> to be restarted"]

Get providers

Request

GET /providers

curl -X 'GET' 'http://localhost:8000/providers' -H 'accept: application/json'

Response

{
  "digitalocean": {
    "enabled": "True",
    "ips": [
      "x.x.x.x"
    ],
    "scaling": {
      "min_scaling": 1,
      "max_scaling": 2
    },
    "size": "s-1vcpu-1gb",
    "region": "lon1"
  },
  "aws": {
    "enabled": false,
    "ips": [],
    "scaling": {
      "min_scaling": 2,
      "max_scaling": 2
    },
    "size": "t2.micro",
    "region": "eu-west-2",
    "ami": "ami-096cb92bb3580c759",
    "spot": false
  },
  "gcp": {
    "enabled": false,
    "project": null,
    "ips": [],
    "scaling": {
      "min_scaling": 2,
      "max_scaling": 2
    },
    "size": "f1-micro",
    "zone": "us-central1-a",
    "image_project": "ubuntu-os-cloud",
    "image_family": "ubuntu-minimal-2004-lts"
  },
  "hetzner": {
    "enabled": false,
    "ips": [],
    "scaling": {
      "min_scaling": 2,
      "max_scaling": 2
    },
    "size": "cx11",
    "location": "nbg1",
    "datacenter": ""
  }
}

Request

GET /providers/digitalocean

curl -X 'GET' 'http://localhost:8000/providers/digitalocean' -H 'accept: application/json'

Response

{
  "enabled": "True",
  "ips": [
    "x.x.x.x"
  ],
  "scaling": {
    "min_scaling": 2,
    "max_scaling": 2
  },
  "size": "s-1vcpu-1gb",
  "region": "lon1"
}

Update provider

Request

PATCH /providers/digitalocean

curl -X 'PATCH' 'http://localhost:8000/providers/digitalocean?min_scaling=5&max_scaling=5' -H 'accept: application/json'

Response

{
  "ips": [
    "192.1.1.2",
    "192.1.1.3"
  ],
  "scaling": {
    "min_scaling": 5,
    "max_scaling": 5
  }
}

CloudProxy runs on a schedule of every 30 seconds: it checks whether the minimum scaling has been met and, if not, deploys the required number of proxies. The new proxies appear in the IP list once they are deployed and ready to be used.
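
For example, a job that needs more capacity could raise the scaling and then poll until the new proxies are live; a sketch (the target of 5 matches the PATCH example above, and the poll interval mirrors the 30-second schedule):

import time
import requests

# Request five DigitalOcean proxies, then wait for them to deploy.
requests.patch("http://localhost:8000/providers/digitalocean",
               params={"min_scaling": 5, "max_scaling": 5})

while len(requests.get("http://localhost:8000").json()["ips"]) < 5:
    time.sleep(30)  # the provisioning schedule runs every 30 seconds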

Roadmap

The project is in early alpha with limited features. In the future, more providers will be supported, autoscaling will be implemented, and a richer API will allow for blacklisting and recycling of proxies.

See the open issues for a list of proposed features (and known issues).

Limitations

This method of scraping via cloud providers has limitations: many websites have anti-bot protections and blacklists in place which can limit the effectiveness of CloudProxy. Many websites block datacenter IPs, and the IPs you get may already be tarnished due to IP recycling. Rotating the CloudProxy proxies regularly may improve results. The best solution for scraping is a proxy service providing residential IPs, which are less likely to be blocked but are much more expensive. CloudProxy is a much cheaper alternative for scraping sites that neither block datacenter IPs nor have advanced anti-bot protection. This point is frequently made when people share this project, which is why I am including it in the README.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

My target is to review all PRs within a week of submission, though sometimes it may take a little longer.

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Christian Laffin - @christianlaffin - [email protected]

Project Link: https://github.com/claffin/cloudproxy

cloudproxy's People

Contributors

claffin, dancardverse, dependabot[bot], dusancz, henryzxu, kingforaday, mrahmadt, xanrag


cloudproxy's Issues

Ghost proxies when destroying using spot in AWS

Expected Behavior

The proxies are destroyed and remain gone.

Actual Behavior

The proxies are destroyed, then some unknown time later they are restarted, but without the cloudproxy tag, since they are started by something other than cloudproxy. They are fully functional proxies, though, just missing the tag.

Steps to Reproduce the Problem

  1. Start cloudproxy
  2. Increase servers to 30, wait.
  3. Decrease servers to 5, wait.

Specifications

  • Version: 0.5.2

Solution

My guess is that when you destroy the instances you also have to remove the spot request somehow, but I don't quite understand why.

Issue with Hetzner provisioning

Hello,

There is no way to provision a new proxy. Probably a change in Hetzner's APIs:

TypeError: __init__() got an unexpected keyword argument 'id'

"Remove" button not working in the UI in Firefox

Expected Behavior

When the "Remove" button is clicked in the UI, a "Removing" message should appear and the proxy should get eventually removed from the proxy list. This works in MS Edge browser, and probably in Chrome too.

Actual Behavior

This does not work in Firefox.

Steps to Reproduce the Problem

  1. Launch cloudproxy with at least one proxy
  2. Open Firefox
  3. Go to http://localhost:8000/ui/
  4. Click Remove

Specifications

  • Version: Firefox 89.0.2, Windows

Statistics

It would be useful to see proxy statistics centrally via the UI.

Can't change the default zone of GCP Proxies

docker run -e USERNAME='xxx' -e PASSWORD='xxx' -e GCP_ENABLED=True -e GCP_PROJECT='xxx' -e GCP_SIZE='e2-micro' -e GCP_ZONE='asia-northeast3-a' -e GCP_SERVICE_ACCOUNT_KEY='xxx' -it -p 8000:8000 laffin/cloudproxy:latest

I tried this command, but it always creates instances in the default US zone; I can't switch to a different zone. Can you fix this?

Thank you

trouble authenticating proxy / documentation of authentication for AWS

Expected Behavior

I followed the docs here https://github.com/claffin/cloudproxy and here https://github.com/claffin/cloudproxy/blob/main/docs/aws.md
I created environment variables, all alphanumeric, for USERNAME and PASSWORD. I created an IAM role as instructed, and I can see the EC2 instances. When using the toy example these are correctly filled in (i.e., instead of being changeme:changeme@ip it is user:password@ip).

Actual Behavior

When connecting to the proxy I get the error message: The administrator of this proxy has not configured it to service requests from you.


This is almost certainly down to my misunderstanding of the docs (as I haven't worked with AWS before). Are we meant to set the username and password somewhere in AWS too? I also tried creating a password for the IAM user and using that, but that isn't allowed to be alphanumeric. I'd also be happy to write some documentation for entry-level beginners like myself once I get it up and running.

Autoscaling

As is:
Deploys the minimum instances set.

To be:
Minimum instances needed to be deployed (can be 0, sleeping)
Maximum instances needed to be deployed
Scale down after x seconds since the last request
Scale up when the API receives a request

Suggestions

Great project @claffin.

Here are a few suggestions I would like to put forth:

  1. Can you change USERNAME and PASSWORD to PROXY_USERNAME and PROXY_PASSWORD, as USERNAME clashes with the Windows env variable USERNAME?
  2. Tag instances with a Name tag so it's possible to differentiate them in the AWS console.

Thanks

Support multiple client applications sharing single proxy cloud

One thing I've always missed in Scrapoxy is the ability to support multiple clients. It would be great to see it implemented here.

In Scrapoxy you could set (min, required, max) scaling, and it works well as long as there is just one client application using the proxy cloud. But as soon as you want to share the same cloud between multiple applications, you run into the problem that they conflict with each other. E.g. when one application has finished crawling, it can't just downscale the cloud, as it's still being used by another application, etc.

Ideally that requires centralized logic that manages requests from multiple client applications. It would need to track the most recently requested scaling for each client and combine them. A very simple approach could be to just take the max of all min/required/max parameters across clients and use that as the scaling; that way, the cloud would only downscale when the last client sends the downscale request (a minimal version of this is sketched below). You can imagine the logic becoming more complex though, e.g. when one client asks to destroy an instance that another client still uses, etc.

As an extra feature, it should ideally handle stale clients: if a client has not communicated for a while, its requirements should be disregarded, to avoid leaving dangling instances when a client unexpectedly disappears.
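
A minimal sketch of that max-combining idea with stale-client handling (all names below are hypothetical; none of this exists in CloudProxy):

import time

# client_id -> (min_scaling, max_scaling, last_seen) -- hypothetical store
client_requests = {}
STALE_AFTER = 600  # seconds without contact before a client is ignored

def effective_scaling():
    """Combine per-client scaling requests by taking the max of each bound."""
    now = time.time()
    live = [(mn, mx) for mn, mx, seen in client_requests.values()
            if now - seen < STALE_AFTER]
    if not live:
        return 0, 0  # no active clients: everything can scale down
    return max(mn for mn, _ in live), max(mx for _, mx in live)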

always destroy instances

Expected Behavior

The GCP and DO cloud providers always destroy instances:

2021-07-30 15:59:33.646 | INFO     | uvicorn.protocols.http.h11_impl:send:461 - 127.0.0.1:60922 - "GET /destroy HTTP/1.1" 200
2021-07-30 15:59:36.640 | INFO     | uvicorn.protocols.http.h11_impl:send:461 - 127.0.0.1:60922 - "GET /destroy HTTP/1.1" 200
2021-07-30 15:59:30.937 | INFO     | uvicorn.protocols.http.h11_impl:send:461 - 127.0.0.1:60924 - "GET /destroy HTTP/1.1" 200

Unable to authenticate through DigitalOcean

Expected Behavior

Running the command:

"docker run -e USERNAME='xxx' -e PASSWORD='xx' -e DIGITALOCEAN_ENABLED=True -e DIGITALOCEAN_ACCESS_TOKEN='xxx' -it -p 8000:8000 laffin/cloudproxy:latest

Username & Password being alphanumeric. Token validated by using:

"doctl auth init -t "xxx"

I get the following error:

File "/usr/local/lib/python3.8/site-packages/digitalocean/baseapi.py", line 233, in get_data
raise DataReadError(msg)
│ └ 'Unable to authenticate you'
└ <class 'digitalocean.DataReadError'>

I think my bug is identical to George Roscoe's. I've never had an issue running this before; I ran this a few weeks ago and it worked completely fine.

Using a $ in a password causes some issues

I tried including a $ as part of my password and it caused some issues. I was able to get instances to spin up, but the dashboard wasn't showing any IP addresses as available.

AWS enhancements

Using the Spot market for AWS would be nice; it's 60-70% cheaper. It looks pretty easy: just add some InstanceMarketOptions to the create_instances call.

Also, it would be nice if the AMI string were a setting, since that AMI doesn't exist in at least eu-west-1.
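
For reference, a rough sketch of what that spot request might look like with boto3 (untested; the SpotOptions values are illustrative, not what CloudProxy uses):

import boto3

ec2 = boto3.resource("ec2")
instances = ec2.create_instances(
    ImageId="ami-096cb92bb3580c759",  # ideally read from an AMI setting
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={  # this is what requests a spot instance
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)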

Unable to authenticate DigitalOcean

Expected Behavior

Running the command:

"docker run -e USERNAME='xxx' -e PASSWORD='xx' -e DIGITALOCEAN_ENABLED=True -e DIGITALOCEAN_ACCESS_TOKEN='xxx' -it -p 8000:8000 laffin/cloudproxy:0.6.4-beta"

The username and password are alphanumeric. I've validated the token using the command:

"doctl auth init -t "xxx"

I receive the following error:

File "/usr/local/lib/python3.8/site-packages/digitalocean/baseapi.py", line 233, in get_data
raise DataReadError(msg)
│ └ 'Unable to authenticate you'
└ <class 'digitalocean.DataReadError'>

Has anyone else had issues with DigitalOcean? When I ran this a few months ago it worked completely fine.

Scrapoxy 4 is out!

Scrapoxy is an open source proxy aggregator, allowing you to manage all proxies in one place 🎯, rather than spreading them across multiple scrapers 🕸️.

Smartly designed for efficient traffic routing 🔀, Scrapoxy minimizes #bans and boosts success rates 🚀.

The tech stack is built on the latest NodeJS and TypeScript, utilizing the NestJS and Angular frameworks.

Here are the key features:

  • ☁️ Cloud Providers with easy installation: Scrapoxy supports many cloud providers like AWS, Azure, or GCP.
  • 🌐 Proxy Services: Scrapoxy supports many proxy services like Rayobyte, IPRoyal or Zyte.
  • 💻 Hardware materials: Scrapoxy supports many 4G proxy farms hardware types, like Proxidize or XProxy.io.
  • 📜 Free Proxy Lists: Scrapoxy supports lists of HTTP/HTTPS proxies and SOCKS4/SOCKS5 proxies.
  • ⏰ Timeout free: Scrapoxy only routes traffic to online proxies to avoid inactive connections.
  • 🔄 Auto-Rotate proxies: Scrapoxy automatically changes IP addresses at regular intervals.
  • 🏃 Auto-Scale proxies: Scrapoxy monitors incoming traffic and automatically scales the number of proxies according to your needs.
  • 🍪 Sticky sessions on Browser: Scrapoxy keeps the same IP address for a scraping session, even for browsers.
  • 🚨 Ban management: Scrapoxy injects the name of the proxy into the HTTP responses.
  • 📡 Traffic interception: Scrapoxy intercepts HTTP requests/responses to modify headers, keeping consistency in your scraping stack. It can add session cookies or specific headers like user-agent.
  • 📊 Traffic monitoring: Scrapoxy measures incoming and outgoing traffic to provide an overview of your scraping session.
  • 🌍 Coverage monitoring: Scrapoxy displays the geographic coverage of your proxies to better understand the global distribution of your proxies.
  • 🚀 Easy-to-use and production-ready: Scrapoxy is suitable for both beginners and experts (Kubernetes / Helm).
  • 🔓 Open Source: And of course, Scrapoxy is open source, under the MIT license.

Checkout https://scrapoxy.io/ !

Code refactor and deduplication

There is significant code duplication across the providers, particularly in main.py, where the logic for each provider is mixed in with the general application logic.

This, and other parts, could be refactored to deduplicate. That would have the additional benefit of making it easier to add providers in the future.

Digitalocean Droplet created but not showing in the API

Hello

I launched the container as per the documentation, but I'm not seeing anything in the UI or the API. I can see that it created the droplets, and it keeps removing/adding new droplets every few minutes.

What am I missing?

export DIGITALOCEAN_ENABLED=True
export DIGITALOCEAN_ACCESS_TOKEN="XXXXX"
export DIGITALOCEAN_MIN_SCALING=2
export DIGITALOCEAN_MAX_SCALING=2
export DIGITALOCEAN_SIZE="s-1vcpu-512mb-10gb"
export DIGITALOCEAN_REGION="fra1"
export AGE_LIMIT="1200"
export USERNAME="XXX"
export PASSWORD='XXXX'

docker run -e USERNAME=$USERNAME \
    -e PASSWORD=$PASSWORD \
    -e DIGITALOCEAN_ENABLED=$DIGITALOCEAN_ENABLED \
    -e DIGITALOCEAN_ACCESS_TOKEN=$DIGITALOCEAN_ACCESS_TOKEN \
    -e DIGITALOCEAN_MIN_SCALING=$DIGITALOCEAN_MIN_SCALING \
    -e DIGITALOCEAN_MAX_SCALING=$DIGITALOCEAN_MAX_SCALING \
    -e DIGITALOCEAN_SIZE=$DIGITALOCEAN_SIZE \
    -e DIGITALOCEAN_REGION=$DIGITALOCEAN_REGION \
    -e AGE_LIMIT=$AGE_LIMIT \
    -it -p 8000:8000 laffin/cloudproxy:latest

What providers should be added next?

At the moment CloudProxy supports AWS and DigitalOcean, which is enough for my own personal use case. I'm keen to hear if there is interest in other providers being supported, please share here and I will prioritise. Otherwise, new features will be prioritised for now.

Requests to AWS start throwing [Errno 113] No route to host

I've run into an issue that I can't seem to pinpoint, so I'm not sure if it's due to CloudProxy (TinyProxy).

I've set up CloudProxy to run in Docker with 15 AWS Spot instances. Then I've written a Python Flask script that fetches the IPs from CloudProxy once every minute, accepts a URL (GET request), and returns the HTML page fetched through one of these AWS proxies. The reason I'm doing it this way is that my original application that uses the HTML data doesn't allow me to set the user agent, so I need to go through a proxy that allows this.

This is the fetch line in the Flask application (proxy):

proxies = {"http": proxy, "https": proxy}
resp = requests.get(url, headers=headers, proxies=proxies, timeout=5, allow_redirects=True, stream=True)

It can run fine for hours until suddenly all my AWS instances start dying. I went through the CloudProxy code and identified that the restarts were due to the ALIVE checks failing. So I disabled that code and also added some exception handling in my own application. That solved the instances dying, but not the original issue.

It turned out that the code line above (requests.get) suddenly starts throwing the following error:

HTTPConnectionPool(host='X.X.X.X', port=8899): Max retries exceeded with url: http://www.url.com?page=1 (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f39cd7d7550>: Failed to establish a new connection: [Errno 113] No route to host')))

I've masked the IP and url for privacy reasons.

So basically, my scripts run for hours with 1-2 requests every second until all the requests suddenly start spitting out the exception above, to the point where it bogs down my entire WiFi. The internet on all of my computers almost stops responding. The only solution is to stop the requests, give it a few minutes and then resume like nothing happened.

After fixing the Spot instances dying, my second idea was that there's some kind of TCP limit in AWS, so I upgraded my instances from Nano to Micro, with no apparent improvement. I considered it being a Docker issue, but I only fetch ALIVE IPs once every minute, so I can't see how that would be limited in any way. I don't see it being a TinyProxy limit, since my 1-2 requests are spread out over 15 different AWS instances.

Do you know if there is any AWS limit I'm hitting or have you experienced anything similar with CloudProxy?

KeyError: 'PublicIpAddress'

AWS creates the instance but doesn't allocate the IP instantly, so when CloudProxy reads the response from AWS it cannot find the IP, which causes the key error.

CloudProxy should still continue to work, as the exception stops being raised once AWS allocates the IP. Will fix in the near future.
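
Until the fix lands, a guard along these lines would avoid the exception (a sketch against the loop in aws_check_delete shown in the traceback, not the actual patch; delete_proxy stands in for whatever the real code calls):

# Instances without a public IP yet are skipped instead of raising KeyError.
public_ip = instance["Instances"][0].get("PublicIpAddress")
if public_ip and public_ip in delete_queue:
    delete_proxy(public_ip)  # hypothetical stand-in for the real deletion call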

Traceback (most recent call last):

  File "/usr/local/lib/python3.8/threading.py", line 890, in _bootstrap
    self._bootstrap_inner()
    │    └ <function Thread._bootstrap_inner at 0x7fd7f2af6040>
    └ <Thread(ThreadPoolExecutor-0_1, started daemon 140565377611520)>
  File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
    │    └ <function Thread.run at 0x7fd7f2af5d30>
    └ <Thread(ThreadPoolExecutor-0_1, started daemon 140565377611520)>
  File "/usr/local/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
    │    │        │    │        │    └ {}
    │    │        │    │        └ <Thread(ThreadPoolExecutor-0_1, started daemon 140565377611520)>
    │    │        │    └ (<weakref at 0x7fd7efa8d400; to 'ThreadPoolExecutor' at 0x7fd7f12318b0>, <_queue.SimpleQueue object at 0x7fd7efb74950>, None,...
    │    │        └ <Thread(ThreadPoolExecutor-0_1, started daemon 140565377611520)>
    │    └ <function _worker at 0x7fd7f11f2d30>
    └ <Thread(ThreadPoolExecutor-0_1, started daemon 140565377611520)>
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 80, in _worker
    work_item.run()
    │         └ <function _WorkItem.run at 0x7fd7f11f2c10>
    └ <concurrent.futures.thread._WorkItem object at 0x7fd7efa93280>
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
             │    │   │    │       │    └ {}
             │    │   │    │       └ <concurrent.futures.thread._WorkItem object at 0x7fd7efa93280>
             │    │   │    └ [<Job (id=251677fd631043ddbb85e5197ee666dc name=aws_manager)>, 'default', [datetime.datetime(2021, 5, 10, 10, 34, 57, 579415,...
             │    │   └ <concurrent.futures.thread._WorkItem object at 0x7fd7efa93280>
             │    └ <function run_job at 0x7fd7f11fdf70>
             └ <concurrent.futures.thread._WorkItem object at 0x7fd7efa93280>
> File "/usr/local/lib/python3.8/site-packages/apscheduler/executors/base.py", line 125, in run_job
    retval = job.func(*job.args, **job.kwargs)
             │   │     │   │       │   └ <member 'kwargs' of 'Job' objects>
             │   │     │   │       └ <Job (id=251677fd631043ddbb85e5197ee666dc name=aws_manager)>
             │   │     │   └ <member 'args' of 'Job' objects>
             │   │     └ <Job (id=251677fd631043ddbb85e5197ee666dc name=aws_manager)>
             │   └ <member 'func' of 'Job' objects>
             └ <Job (id=251677fd631043ddbb85e5197ee666dc name=aws_manager)>

  File "/app/cloudproxy/providers/manager.py", line 15, in aws_manager
    ip_list = aws_start()
              └ <function aws_start at 0x7fd7efc91550>

  File "/app/cloudproxy/providers/aws/main.py", line 92, in aws_start
    aws_check_delete()
    └ <function aws_check_delete at 0x7fd7efc914c0>

  File "/app/cloudproxy/providers/aws/main.py", line 77, in aws_check_delete
    if instance["Instances"][0]["PublicIpAddress"] in delete_queue:
       │                                              └ set()
       └ {'Groups': [], 'Instances': [{'AmiLaunchIndex': 0, 'ImageId': 'ami-096cb92bb3580c759', 'InstanceId': 'i-0b18cc1721ffae51a', '...

KeyError: 'PublicIpAddress'

Originally posted by @sblfc in #21 (comment)

Multiple regions & Historical reporting

Hello

Thank you very much for this great script; it's simple and can be a replacement for Scrapoxy.

Is it possible to define multiple regions? For example, I want to have 3-5 regions with DigitalOcean, and CloudProxy would randomly create VMs across them.

My second question: is there any log file or report that I can use to check how many VMs have been created, their duration, and the period? Then I could compare my hosting cost, say on a weekly basis, and decide which cloud provider is better for me.

account key problem AWS not enabled

AWS not enabled

  1. I want to use AWS. I have defined AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, but I still ran into this problem.

The command I run:

docker run -e USERNAME='CloudProxy2' \
    -e PASSWORD='CloudP@roxy123' \
    -e AWS_ENABLED=True \
    -e AWS_ACCESS_KEY_ID='xxxxxxxxx' \
    -e AWS_SECRET_ACCESS_KEY='xxxxxxxxxx' \
    -it -p 8000:8000 laffin/cloudproxy:0.6.0-beta


Please point me in the right direction. @claffin

HTTPS on proxy servers

As is:
cloudproxy -> proxy server connectivity is over HTTP, including the HTTP auth. If the client request uses HTTPS, the request itself remains encrypted; however, the auth and any other HTTP communication are unencrypted.

To be:
cloudproxy -> proxy server, all communication over HTTPS.

UI not updating

I am running the docker command with Hetzner, and although server creation is working, the UI and the other endpoints are not updating (they return []).

Specifications

docker run -e USERNAME='xxxxxx' \
    -e PASSWORD='xxxxxx' \
    -e HETZNER_ENABLED=True \
    -e HETZNER_ACCESS_TOKEN='xxxxxx' \
    -it -p 8000:8000 laffin/cloudproxy:latest
