
finetuning-subnet's Introduction

Nous Finetuning Subnet


Introduction

Note: The following documentation assumes you are familiar with basic Bittensor concepts: Miners, Validators, and incentives. If you need a primer, please check out https://docs.bittensor.com/learn/bittensor-building-blocks.

The Nous-Bittensor subnet rewards miners for fine-tuning Large Language Models (LLMs) on a continuous stream of synthetic data produced by subnet 18 (Cortex.t, also on Bittensor). It is the first-ever continuous fine-tuning benchmark, with new data generated daily, and the first incentivized fine-tuning benchmark. It is also the first Bittensor subnet to perform true cross-subnet communication, where data produced by one subnet is consumed by another.

The mechanism works like this:

1. Miners train and periodically publish models to 🤗 Hugging Face and commit the metadata for that model to the Bittensor chain to prove the time of training.
2. Validators download the models from 🤗 Hugging Face for each miner based on the Bittensor chain metadata and continuously evaluate them, setting weights based on the performance of each model against the synthetic data. 
3. The Bittensor chain aggregates weights from all active validators using Yuma Consensus to determine the proportion of TAO emission rewarded to miners and validators.
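
As a rough illustration of steps 1 and 2, here is a minimal sketch of the publish-and-commit flow. It is not the subnet's actual code: the repo id and wallet name are placeholders, HfApi.upload_folder is the standard Hugging Face upload call, and subtensor.commit is assumed here as the on-chain commitment helper (the subnet itself uses its own chain-store wrappers).

    # Minimal sketch of the miner publish flow; illustrative only.
    import bittensor as bt
    from huggingface_hub import HfApi

    def publish_model(local_dir: str, hf_repo_id: str, wallet, netuid: int):
        # 1. Push the fine-tuned model files to Hugging Face.
        HfApi().upload_folder(folder_path=local_dir, repo_id=hf_repo_id, repo_type="model")
        # 2. Commit a pointer to the model on the Bittensor chain, proving
        #    at which block this version was published.
        bt.subtensor().commit(wallet, netuid, hf_repo_id)

    publish_model("./finetuned-model", "your-user/your-model",
                  bt.wallet(name="default"), netuid=6)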

See the Miner and Validator docs for more information about how they work, as well as setup instructions.


Incentive Mechanism

Bittensor hosts multiple incentive mechanisms through which miners are evaluated by validators for performing actions well. Validators perform the evaluation and 'set weights', which are transactions on Bittensor's blockchain. Each incentive mechanism in Bittensor is called a 'subnet'. The weights, together with the amount of TAO held by the validators, become inputs to Bittensor's consensus mechanism, Yuma Consensus (YC). YC drives validators toward consensus: agreement about the value of the work done by miners. The miners with the highest agreed-upon scores are minted TAO, the network's digital currency.

Miners within this subnet are evaluated on the number of times the model they host achieves a lower loss than another model on the network when evaluated on the latest data generated by the Cortex.t subnet. To perform well, miners must attain the lowest loss on the largest number of random batches; a sketch of this pairwise scoring follows below. Finding the best model and delta at the earliest block ensures the most incentive.
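
For intuition, here is a minimal sketch (not the subnet's actual evaluation code) of pairwise win-fraction scoring: every miner is compared against every other miner on the same evaluation batches, and a miner's score is the fraction of those comparisons it wins.

    # Illustrative sketch of pairwise win-fraction scoring.
    def win_fractions(losses):
        # losses: dict mapping miner uid -> list of per-batch losses,
        # all computed on the same evaluation batches.
        uids = list(losses)
        n_batches = len(next(iter(losses.values())))
        per_miner = n_batches * (len(uids) - 1)  # comparisons each miner takes part in
        wins = {uid: 0 for uid in uids}
        for b in range(n_batches):
            for i, a in enumerate(uids):
                for c in uids[i + 1:]:
                    if losses[a][b] < losses[c][b]:
                        wins[a] += 1
                    else:
                        wins[c] += 1
        return {uid: wins[uid] / per_miner for uid in uids}

    # Example: miner "A" beats "B" on both batches.
    print(win_fractions({"A": [2.1, 2.0], "B": [2.5, 2.4]}))  # {'A': 1.0, 'B': 0.0}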


Getting Started

TL;DR:

  1. Chat
  2. Leaderboard

This repo's main conversation is carried out in the Bittensor Discord. Visit the 'finetuning' channel to ask questions and get real-time feedback. You can view the ongoing operation of the incentive mechanism, the best miners (see 'incentive'), and the most in-consensus validators (see 'vtrust') using this taostats link. The table shows all 256 participant UIDs with corresponding YC stats and earnings.

See Miner Setup to learn how to set up a Miner.

See Validator Setup to learn how to set up a Validator.


Feedback

We welcome feedback!

If you have a suggestion, please reach out on the Discord channel, or file an Issue.


License

This repository is licensed under the MIT License.

# The MIT License (MIT)
# Copyright © 2023 Yuma Rao

# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the “Software”), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.

# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

finetuning-subnet's People

Contributors

chpiatt · jquesnelle · romanorac · surcyf123 · teknium1 · temporalyx


finetuning-subnet's Issues

You are attempting to use Flash Attention 2.0 with a model not initialized on GPU.

error:

You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with model.to('cuda')


I am running on a newly set-up H100 and everything seems to be working (the miner downloads the shards), but I am seeing this error in the logs.

I am also getting a loss of 7-8.
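
The warning message itself points at the standard fix: load the model on CPU first and move it to the GPU afterwards. A minimal sketch with transformers ("your-org/your-model" is a placeholder):

    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "your-org/your-model",                    # placeholder model id
        torch_dtype=torch.bfloat16,               # Flash Attention 2 needs fp16/bf16
        attn_implementation="flash_attention_2",
    )
    model = model.to("cuda")                      # move to GPU after init, as the warning asks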

'wandb.apis.public' is not a package

Hey guys, I'm running the miner in the finetuning-subnet repo, but when I run python neurons/miner.py --wallet.name default --wallet.hotkey default --hf_repo_id binhndx1922001/finetuning-miner-test, I get the error above ('wandb.apis.public' is not a package). Can anyone help me?

epsilon penalty questions/concerns

timestamp_epsilon = 0.02

I understand the purpose of the epsilon penalty, but I think there are two problems:

  1. 2% is not a trivial amount, and the value doesn't decay over time; it's just a fixed penalty for having joined the network later.
  2. Even if someone manages to beat the other models with a greater-than-epsilon loss reduction, the miners with earlier blocks can simply download the new model and re-commit it, meaning they'd automatically win again.

Perhaps just prevent committing the same hash? Not sure what other mechanisms would work.
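
For context, here is a minimal sketch of how a fixed timestamp epsilon typically enters a pairwise comparison (illustrative; not the subnet's actual code):

    timestamp_epsilon = 0.02

    def earlier_model_wins(loss_early, loss_late, epsilon=timestamp_epsilon):
        # The earlier-committed model's loss is discounted by epsilon, so the
        # later model must beat it by more than ~2% to win the comparison.
        return loss_early * (1 - epsilon) <= loss_late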

no module named packaging

This is in a fresh conda environment; installing the packaging package with pip doesn't help.

Obtaining file:///code/git/finetuning-subnet
  Installing build dependencies ... done
  Checking if build backend supports build_editable ... done
  Getting requirements to build editable ... done
  Installing backend dependencies ... done
  Preparing editable metadata (pyproject.toml) ... done
Collecting bittensor==6.9.3 (from finetuning-subnet==0.2.4)
  Using cached bittensor-6.9.3-py3-none-any.whl.metadata (20 kB)
Collecting huggingface-hub (from finetuning-subnet==0.2.4)
  Using cached huggingface_hub-0.22.2-py3-none-any.whl.metadata (12 kB)
Collecting matplotlib (from finetuning-subnet==0.2.4)
  Using cached matplotlib-3.8.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.8 kB)
Collecting numpy (from finetuning-subnet==0.2.4)
  Using cached numpy-1.26.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Collecting pandas (from finetuning-subnet==0.2.4)
  Using cached pandas-2.2.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (19 kB)
Collecting pydantic==1.10 (from finetuning-subnet==0.2.4)
  Using cached pydantic-1.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (138 kB)
Collecting python-dotenv (from finetuning-subnet==0.2.4)
  Using cached python_dotenv-1.0.1-py3-none-any.whl.metadata (23 kB)
Collecting rich (from finetuning-subnet==0.2.4)
  Using cached rich-13.7.1-py3-none-any.whl.metadata (18 kB)
Collecting safetensors (from finetuning-subnet==0.2.4)
  Using cached safetensors-0.4.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.8 kB)
Collecting torch (from finetuning-subnet==0.2.4)
  Using cached torch-2.2.2-cp311-cp311-manylinux1_x86_64.whl.metadata (25 kB)
Collecting transformers==4.38.2 (from finetuning-subnet==0.2.4)
  Using cached transformers-4.38.2-py3-none-any.whl.metadata (130 kB)
Collecting wandb (from finetuning-subnet==0.2.4)
  Using cached wandb-0.16.6-py3-none-any.whl.metadata (10 kB)
Collecting sentencepiece (from finetuning-subnet==0.2.4)
  Using cached sentencepiece-0.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.7 kB)
Collecting jinja2>=3.0.0 (from finetuning-subnet==0.2.4)
  Using cached Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB)
Collecting flash-attn==2.5.5 (from finetuning-subnet==0.2.4)
  Using cached flash_attn-2.5.5.tar.gz (2.5 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [20 lines of output]
      Traceback (most recent call last):
        File "/code/git/finetuning-subnet/subnet/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/code/git/finetuning-subnet/subnet/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/code/git/finetuning-subnet/subnet/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-ck9af66h/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 325, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-ck9af66h/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 295, in _get_build_requires
          self.run_setup()
        File "/tmp/pip-build-env-ck9af66h/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 487, in run_setup
          super().run_setup(setup_script=setup_script)
        File "/tmp/pip-build-env-ck9af66h/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 311, in run_setup
          exec(code, locals())
        File "<string>", line 9, in <module>
      ModuleNotFoundError: No module named 'packaging'
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
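
A common workaround for this kind of flash-attn build failure (an assumption; not confirmed in this thread): flash-attn's setup.py imports packaging at build time, but pip's isolated build environment doesn't provide it. Installing the build prerequisites into the environment first and then disabling build isolation usually helps:

    pip install torch packaging wheel
    pip install flash-attn==2.5.5 --no-build-isolation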

ModuleNotFoundError: No module named 'model'

When I run:

python3 neurons/miner.py --netuid 6 --subtensor.network local --wallet.name kb1-coldkey --wallet.hotkey kb1-hotkey --logging.debug

I get this error:

Traceback (most recent call last):
File "/home/paperspace/finetuning-subnet/neurons/miner.py", line 28, in
from model.model_updater import ModelUpdater
ModuleNotFoundError: No module named 'model'

I couldn't find the answer on the internet; the "model" folder is there.

Any pointers?
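
One likely cause (an assumption; not confirmed in this thread): the top-level model package is imported relative to the repository root, so running the script from another directory breaks the import. Running from the repo root, or putting it on PYTHONPATH, usually resolves this:

    cd finetuning-subnet
    export PYTHONPATH=$PWD
    python3 neurons/miner.py --netuid 6 --subtensor.network local --wallet.name kb1-coldkey --wallet.hotkey kb1-hotkey --logging.debug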

/docs/miner.md

See Validator Pseudocode for more information on how the evaluation occurs.

  • Invalid link: the master branch of finetuning-subnet does not contain the path docs/docs/validator.md.
  • Remove the duplicate /docs/docs path.
