Official Repository for Probing Learned Feedback Patterns in Large Language Models

By Luke Marks, Amir Abdullah, Clement Neo, Rauno Arike, Philip Torr, Fazl Barez

  1. This repository provides scripts to train several LLM and task combinations under RLHF using PPO.
  2. It also supports training sparse autoencoders for feature extraction on the MLP layers of LLMs (see the sketch after this list).
  3. Finally, it supports classifying those features and training a linear approximation of a fine-tuned LLM's implicit reward model.
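
For orientation, here is a minimal sketch of the kind of sparse autoencoder used for feature extraction. It is an illustrative PyTorch example, not the repository's implementation (which lives in src/sparse_codes_training/models/sparse_autoencoder.py); the class name, dictionary size, and L1 coefficient are assumptions.

    # Illustrative sketch only; the actual implementation is in
    # src/sparse_codes_training/models/sparse_autoencoder.py.
    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        """Overcomplete autoencoder trained on MLP activations."""

        def __init__(self, activation_dim: int, dict_size: int):
            super().__init__()
            self.encoder = nn.Linear(activation_dim, dict_size)
            self.decoder = nn.Linear(dict_size, activation_dim)

        def forward(self, x: torch.Tensor):
            features = torch.relu(self.encoder(x))   # sparse feature coefficients
            reconstruction = self.decoder(features)
            return reconstruction, features

    def sae_loss(x, reconstruction, features, l1_coeff: float = 1e-3):
        # Reconstruction error plus an L1 penalty encouraging sparse features.
        # The l1_coeff value here is an assumed placeholder.
        mse = torch.mean((x - reconstruction) ** 2)
        return mse + l1_coeff * features.abs().mean()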

Installation

From source

git clone https://github.com/apartresearch/Interpreting-Reward-Models.git
cd Interpreting-Reward-Models
pip install .

Repository structure

The repository is divided into two major components: RLHF model training lives under src/rlhf_model_training, and autoencoder training lives under src/sparse_codes_training.

The structure looks like this:

requirements.txt
scripts/
    ppo_training/
        run_experiment.sh
    sparse_codes_training/
        experiment.sh
    setup_environment.sh

src/
    rlhf_model_training/
        reward_class.py
        rlhf_model_pipeline.py
        rlhf_training_utils/
    sparse_codes_training/
        metrics/
        models/
            sparse_autoencoder.py
        experiment_helpers/
            autoencoder_trainer_and_preparer.py
            experiment_runner.py
            layer_activations_handler.py
        experiment.py
        experiment_configs.py
    utils/

experiment.py is the main entrypoint for autoencoder training: it parses command-line arguments and selects and launches the training run. experiment_runner.py contains most of the core logic from the paper; it extracts divergent layers, initializes models, and trains autoencoders on activations.

The LayerActivationsHandler class provides the two necessary primitives: extracting activations from a layer, and calculating divergences between the corresponding layers of two neural networks (sketched below).
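
The following sketch shows one way those two primitives could look, assuming PyTorch forward hooks and a simple mean-squared divergence; the function names are illustrative stand-ins, not the repository's actual API.

    import torch

    def get_layer_activations(model, layer, inputs):
        # Capture the output of `layer` during one forward pass. Assumes the
        # hooked layer (e.g. an MLP block) returns a plain tensor.
        captured = {}

        def hook(module, hook_inputs, output):
            captured["activations"] = output.detach()

        handle = layer.register_forward_hook(hook)
        with torch.no_grad():
            model(**inputs)
        handle.remove()
        return captured["activations"]

    def layer_divergence(model_a, layer_a, model_b, layer_b, inputs):
        # One possible divergence measure: mean-squared difference between
        # the activations of corresponding layers on the same inputs.
        act_a = get_layer_activations(model_a, layer_a, inputs)
        act_b = get_layer_activations(model_b, layer_b, inputs)
        return torch.mean((act_a - act_b) ** 2).item()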

Getting started

  1. Run source scripts/setup_environment.sh to set your Python path. Run it as source scripts/setup_environment.sh -v if you also want to create and activate the appropriate virtual environment with all dependencies.
  2. The main script for training PPO models is scripts/ppo_training/run_experiment.sh.
  3. The script for training autoencoders is scripts/sparse_codes_training/experiment.sh. Modify these two scripts as needed to launch new PPO or autoencoder training runs. Other experiment_x scripts in the same directory explore other parameter choices.

Reference

If you use this work, please cite:

@misc{marks2023interpreting,
      title={Probing Learned Feedback Patterns in Large Language Models}, 
      author={Luke Marks and Amir Abdullah and Clement Neo and Rauno Arike and Philip Torr and Fazl Barez},
      year={2023},
      eprint={2310.08164},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Issues

More data: Create two datasets optimized for VADER, and upload to datasets

The current IMDB dataset contains very few examples of tokens from the VADER lexicon. As such, let's create two new datasets with high overlap with the VADER lexicon:

  1. A simple version that draws from OpenWebText and uses sentences with high overlap with the VADER lexicon.
  2. A "poisoned" version that flips the reward of 30 of the VADER tokens. This gives us a baseline to check whether our IRMs can recover these tokens.

The columns of the dataset will be text, lexicon_tokens, token_rewards_dict, and poisoned, which is a (usually empty) list of tokens; there will be 30 poisoned tokens in total. A sketch of a row-construction helper follows below.

The VADER lexicon tokens will be ordered by their frequency in English, and the top 4000 will be picked, with 5 occurrences each.
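
A hypothetical helper for building one row with these columns might look like the sketch below; the function name and the sign-flip convention for poisoned rewards are assumptions, not a committed design.

    def build_row(text, vader_lexicon, poisoned_tokens=frozenset()):
        # vader_lexicon maps tokens to sentiment scores; poisoned_tokens is
        # the set of ~30 tokens whose rewards are flipped in the "poisoned"
        # dataset variant (sign flip is an assumed convention).
        tokens = [t for t in text.lower().split() if t in vader_lexicon]
        rewards = {
            t: -vader_lexicon[t] if t in poisoned_tokens else vader_lexicon[t]
            for t in tokens
        }
        return {
            "text": text,
            "lexicon_tokens": tokens,
            "token_rewards_dict": rewards,
            "poisoned": [t for t in tokens if t in poisoned_tokens],
        }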
