
farm-proxy's Introduction


License

Any content in this repository is released under our proprietary License. The current version of the License is available in this repository in a separate file and can also be accessed at the following website: https://braiins.com/farm-proxy/license.

By downloading, copying, installing, or otherwise using all or any content from this repository, you accept all the terms and conditions set out in the License, so please, read the License carefully. If you do not agree with any of the terms and conditions of the License, do not use any content from this repository and delete or destroy any such content that is already in your possession or control.

Please keep in mind that the License automatically renews every month and can be changed or amended from time to time, so revisit the License regularly to keep track of any changes. The date when the License was last materially changed or amended is listed in the header of the License for convenience.

If you have any other questions, please contact us at [email protected].

Introduction

Braiins provides Braiins Farm Proxy, a free hashrate aggregation proxy for the mining world. Braiins Farm Proxy encompasses four primary components:

  • Farm Proxy
  • Configurator (headless, GUI)
  • Prometheus
  • Grafana

Go to Braiins Academy for the full documentation.

(Screenshot: Client Dashboard)

Quick Start

  1. Clone the git repository git clone https://github.com/braiins/farm-proxy.git
  2. Go to the farm-proxy repository cd farm-proxy
  3. Optional: Create your preferred farm proxy settings from the CLI in this step, or use the GUI in step 7 (see section Configuration)
  4. Run the service stack with the command docker compose up -d
  5. Verify that farm-proxy is running docker ps
  6. Open URL http://localhost:3000 to see the Client Dashboard in Grafana
  7. Open URL http://localhost:7777 to manage Farm Proxy configuration via GUI
  8. Connect miners to the Braiins Farm Proxy (fill in the proxy URL stratum+tcp://<your_host>:<port> in the pool settings of the miners)
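To confirm the stack came up after step 4, you can probe the published ports with a small Python sketch. This is purely illustrative and not part of the repository; it assumes the default ports from the steps above (3000 for Grafana, 7777 for the GUI) and that you run it on the proxy host.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Default ports from the Quick Start above (adjust if you remapped them):
for name, port in [("Grafana", 3000), ("Farm Proxy GUI", 7777)]:
    state = "up" if port_open("localhost", port) else "down"
    print(f"{name} on port {port}: {state}")
```

If a port reports "down", check the container status with docker ps as in step 5.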

Updating Farm Proxy

If you already have Farm Proxy installed and you want to update it to the latest version, you can use the following guide:

  1. Go to the farm-proxy repository cd farm-proxy
  2. Back up your custom configuration in the config folder
  3. Pull the latest version of Farm Proxy with git pull
    Note: this can overwrite your current configuration, so make sure you have a backup
  4. Restore your configuration by either:
    1. Copying it directly to the file ./config/active_profile.toml
    2. Copying it to the ./config directory and then making it active from GUI after you start Farm Proxy
  5. Start Farm Proxy with docker compose up -d

Farm Proxy Distribution

Farm Proxy currently runs on Linux as multi-platform software for the following architectures:

  • AMD 64bit
  • ARM 64bit
  • ARMv7

Prerequisites

Before you start, a few prerequisites must be installed: Git, Docker, and Docker Compose (all three are used in the Quick Start above).

Configuration

  • Farm Proxy can be configured via CLI or via GUI.
  • Farm Proxy config files are located in the directory ./config and they must be TOML files.
  • Individual config files are called and referred to as profiles.
  • The profile loaded at startup is ./config/active_profile.toml.
  • If FP is run alone as a single service (docker compose up -d farm-proxy), it either uses the configuration it used the last time or, if the Docker volumes are empty, waits for manual configuration (see below).

Farm Proxy may be reconfigured while it is running. There are two ways to reconfigure running FP:

  1. Via farm-proxy-configurator - a one-shot Docker container that reads the config file ./config/active_profile.toml and configures the running (or starting up) FP service. Running docker compose up -d farm-proxy-configurator will reload the FP configuration.
  2. Via farm-proxy-gui service - by manually editing the configuration in a graphical user interface in a web browser (available on http://localhost:7777) and saving it.

CLI configuration

  1. Copy a ./config/templates/01_minimal.toml config profile to the ./config/active_profile.toml
  2. Edit the file: define your upstream pool, username, and optionally other desired settings. Make sure you specify your pool username so that hashrate is routed to the correct pool account.
  3. Start the Farm-Proxy stack with docker compose up -d
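For orientation, a minimal profile might look like the sketch below. The [[server]] field names (name, port) are taken from an error message quoted later on this page; the [[target]] section and its field names are assumptions made for illustration. Consult ./config/templates/01_minimal.toml for the authoritative structure.

```toml
# Hypothetical minimal active_profile.toml; verify field names against
# ./config/templates/01_minimal.toml before use.

[[server]]             # listening endpoint your miners connect to
name = "main"
port = 3333

[[target]]             # upstream pool (section and field names assumed)
name = "my_pool"
url = "stratum+tcp://pool.example.com:3333"
user_identity = "my_pool_username"  # hashrate is routed to this account
```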

Unless the volumes are pruned with docker volume prune or docker compose down -v, FP always caches and initially loads the config it was last running with.
Important note: if you set up a profile via the GUI and then plan to use only the CLI for configuration, it is recommended to delete the file ./config/.profile. This file contains the name of the active GUI configuration, which can override the configuration you set up manually in ./config/active_profile.toml via the CLI.

GUI configuration

When Farm Proxy is running, you can use the GUI (available at http://localhost:7777) to configure it. The GUI allows you to manage user profiles (create/edit/delete) and select which profile will be active. After you save a profile, it is created in the folder ./config as a TOML file named after the profile. On the page http://localhost:7777/settings you can create a new profile in three different ways:

  1. As a blank profile that will be edited via GUI
  2. As a predefined profile that will be created from configuration templates and later edited
  3. As a profile that you import from an already existing TOML file

Note: when running the GUI for the first time, it can incorrectly state that no profile is active, even when a profile set up with the CLI is already active. This is a known limitation that is bypassed as soon as you select an active config via the GUI.
For advanced users: if you know you don't want to use the GUI to configure Farm Proxy, you can disable it entirely by deleting/commenting out the farm-proxy-gui component in docker-compose.yml to save some resources.

Braiins Telemetry Configuration

Since Braiins Farm Proxy version 23.01, telemetry data is collected in order to track technical issues, errors, and future improvements of the application. Users who do not want to send telemetry data to Braiins can opt out: disable telemetry from the GUI configuration, or add the following lines to the proxy configuration manually:

[telemetry]
enable_farm_metrics_telemetry = false

farm-proxy's People

Contributors

lkr-braiins, spigi42, vitficl


farm-proxy's Issues

where to change port 8080

ERROR farm_proxy::http_api: Cannot bind address for monitoring 0.0.0.0:8080
2023-01-07T18:26:01.985816Z ERROR warp::server: error binding to 0.0.0.0:8080: error creating server listener: Address already in use (os error 98)
2023-01-07T18:26:01.985866Z INFO farm_proxy::http_api: Monitoring running on 0.0.0.0:8080

Connection issues in the log file

From time to time, the following appears in the log:
WARN...infra::probing: Cannot connect to the remote end
ERROR...target_quality: ConnectionError found on endpoint=stratum+tcp.......
But on the pool everything is OK: connected, hashrate is good.
Miners: Bitmain S19j 104 TH/s, stock firmware
If I use 2 targets, the same error appears for both targets, but the second target does not work properly: it is connected to the pool for a few minutes and then disappears. I used hr_weight = 95 for the first target and hr_weight = 5 for the second, but only the first target is OK. Maybe I am doing the configuration wrong in the config file?
As I understand it, you cannot split hashrate, you can only split miners. I have 9 miners connected to the proxy with the same IP and port address; how do I connect each ASIC separately in order to split hashrate?
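For reference, the weighted split described in this report would look roughly like the sketch below. hr_weight is the parameter name quoted above; every other field name is an assumption made for illustration and should be checked against the templates in ./config/templates.

```toml
# Hypothetical two-target split, weighted 95/5 by hashrate.
[[target]]
name = "primary"
url = "stratum+tcp://pool-a.example.com:3333"
hr_weight = 95

[[target]]
name = "secondary"
url = "stratum+tcp://pool-b.example.com:3333"
hr_weight = 5
```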

XEC (ecash)

Shows a wrong hashrate when connected to the ViaBTC pool, with a lot of invalid hashrate and 0 downstream connections, yet on the ViaBTC pool the hashrate is OK and the workers are present.
On solopool.org everything is fine in Grafana: hashrate, downstream connections, pool hashrate is also OK, no invalid hashrate.

FP stops after 5 minutes

Hi
I tested FP on a small scale and it worked OK with no problems, but when I tried to test it on two containers with 600 miners, it keeps stopping after about 5 to 10 minutes.
fp_quits.log

Prometheus volume size

Hello,

After a while I could not log in to the Grafana service even though the login was right. In a log file I found that the hardware disk was full. The Prometheus service had filled 100% of the disk capacity.

In the docker compose file there was a limit to automatically delete old data:

 - '--storage.tsdb.retention.time=200h'

However, the disk capacity was exceeded before it could erase anything.

It would be nice to have info about the data size per miner per day. It would help to do the math for the required HDD size.

I solved the issue by stopping the Prometheus and Grafana services, erasing the Prometheus volume, and running docker-compose up.

The issue did not affect the mining itself. The farm-proxy service was OK.

bos_referral_code error

When bos_referral_code is set, I get this error and the farm won't start:

Error: Invalid configuration: `/conf/farm_proxy.toml'

Caused by:
    0: Unable to parse source string as toml
    1: TOML parse error at line 6, column 1
         |
       6 | bos_referral_code = "xxxxxxxxxxxxxxxx"
         | ^^^^^^^^^^^^^^^^^
       unknown field bos_referral_code, expected one of name, port, extranonce_size, validates_hash_rate, use_empty_extranonce1, submission_rate, slushpool_bos_bonus, braiinspool_bos_bonus

My config is:

[[server]]
name = "S1"
port = 3338
braiinspool_bos_bonus = "xyz"
bos_referral_code = "xxxxxxxxxxxxxxxx"

FarmProxy Fails to proxy Stratum v1 Workers to Stratum v2 Pool

Expected Behavior: configuring FP with a Stratum V2 endpoint (Braiins Pool) should permit Stratum V2 mining by all workers behind the proxy, regardless of whether they're using Stratum V1 or Stratum V2, facilitating standardizing a mixed mining environment onto Stratum V2 or operation of legacy miners on modern pool infrastructure without the need to operate against two pool endpoints (and the associated deficiencies of using Stratum V1).

Observed Behavior:
Err(NotPresent)
2023-01-16T02:40:17.487400Z INFO farm_proxy: Welcome to Farm Proxy 22.11 (commit-id: f7976df13b2bb3d2a625b30c3fd3077d8e6a04cf, is-dirty=false, additional-commits=false), rev=f7976df13b
2023-01-16T02:40:17.487446Z INFO farm_proxy: Using configuration file: /conf/farm_proxy.toml
Error: Invalid configuration: /conf/farm_proxy.toml

Caused by:
Stratum V2 is not supported URL for target. Use Stratum V1 address`

Steps to reproduce: Connect a Stratum v1 worker to Farm Proxy via Stratum V1, configure Farm Proxy to mine against a pool's Stratum v2 URL & credentials, observe that miner remains in "waiting for work" state and "docker logs farm-proxy" throws the above error.

If not yet implemented, how close is this functionality to being complete? If it is implemented, does something specific need to be done in the toml file to enable it successfully?

MRR Service (Proxy Server) issues

  1. If you put 2 targets in the config file, where the primary target is the MiningRigRentals service (another proxy server) and the second target is some kind of pool, then the first target does not work at all (I tried every port+URL suggested in the MRR config on their website). A connection error also appears in the Docker log, and the rigs jump to the 2nd target (the pool) and work there with no errors or issues.
  2. When you put only 1 target (the MiningRigRentals service) in the config file, it connects and works, but the same error is still present in the Docker logs.
  3. When you put 2 targets in the config file (2 different MRR workers on different servers), both targets are also unreachable (no URL+port combination helps).
    In all 3 scenarios, playing with the extranonce parameters does not help at all, but without the extranonce parameters even a single MRR target does not work. A single MRR target works only with the following config:
    .....Server:
    extranonce_size = 3
    use_empty_extranonce1 = true
    .....Target:
    extranonce_size = 4

This happens because the MRR service is not a pool but another proxy server, and that is the point of the problem.
Also, the Grafana Debug Dashboard shows a lot of rejected shares when you use two targets in the config and one of them is another proxy server.
Please review this issue.
Maybe some timings/reconnect timeouts need tuning, as the MRR service picks up hashrate much more slowly than a direct pool target; the proxy simply jumps to the pool instead of waiting for MRR, because MRR initially shows a small hashrate with status "connected", then the hashrate goes to 0 while the status sometimes still remains "connected".
A screenshot of the parameters is attached; how do I get access here?
(Screenshot from 2023-06-14)
Or maybe I am doing the config wrong?

P.S.: the admin of MRR has reported that their proxy server's extranonce is not used, because it depends on which pool you connect to: the idle pool while the rigs are idle, and the renter's pool during the rental phase.

Hashrate split

How does hashrate splitting work? I've set everything up, but the hashrate goes to only one pool

(Screenshots: Grafana dashboards, 2022-05-23)

24.06 won't start and throws an error

I ran docker-compose down, git pull, and then docker-compose up -d. It throws the error below:

payam@payam-HP-EliteDesk:~/farm-proxy$ sudo docker-compose up -d
Creating network "farm-proxy_default" with the default driver
Creating volume "farm-proxy_config_cache" with default driver
Pulling farm-proxy (braiinssystems/farm-proxy:24.06)...
24.06: Pulling from braiinssystems/farm-proxy
f7b75fe1f735: Pull complete
d0908c139139: Pull complete
350981e3bd69: Pull complete
5f4c1538818a: Pull complete
Digest: sha256:217c9603ba8e6ece2109ab7ef4fde532f57637db96ce3b85a2e3c85fcae9d6c5
Status: Downloaded newer image for braiinssystems/farm-proxy:24.06
Pulling farm-proxy-gui (node:20.11-slim)...
20.11-slim: Pulling from library/node
8a1e25ce7c4f: Pull complete
503fbb4f74df: Pull complete
6c530100026f: Pull complete
ff31387ca9a1: Pull complete
09f1e69d0450: Pull complete
Digest: sha256:357deca6eb61149534d32faaf5e4b2e4fa3549c2be610ee1019bf340ea8c51ec
Status: Downloaded newer image for node:20.11-slim
Creating prometheus ... 
Creating prometheus ... error
WARNING: Host is already in use by another container

Creating farm-proxy ... done
Creating farm-proxy-gui          ... done
Creating farm-proxy-configurator ... done

ERROR: for prometheus  Cannot start service prometheus: driver failed programming external connectivity on endpoint prometheus (1b86c7d9b485952c0de369d9ef436a03db3d835741e6f9d0c430e195397ff3e4): Error starting userland proxy: listen tcp4 0.0.0.0:9090: bind: address already in use
ERROR: Encountered errors while bringing up the project.

EPROM bypass

Antminer S17+, 3 hashboards, 65 ASICs. I cannot flash the EPROM; can I bypass it to run the 3 boards? Can I add code to bypass it based on the kernel log or main log? It's been 4 days so far since the repair was done, trying to flash with 2 different code editors. Thank you.

FarmProxy Shows Workers as Connected but not individual hashrates

This may be considered a feature request, but when configured with individual workers, FarmProxy fails to show individual worker performance: accepted shares, rejected shares, average hashrate, etc.

A view of a stacked graph and statistics similar to what would be seen on the pool manager side (such as on the Braiins worker dashboard) would be ideal, so local statistics could be viewed per worker, without needing to go to the web.
