
ReAct's Introduction

ReAct Prompting

GPT-3 prompting code for the ICLR 2023 paper ReAct: Synergizing Reasoning and Acting in Language Models.

Setup

You first need an OpenAI API key, stored in the environment variable OPENAI_API_KEY.

Package requirement: openai. For the AlfWorld experiments, also install alfworld following the instructions in the AlfWorld repository.
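
For reference, here is a minimal sketch of how the notebooks talk to the OpenAI API, assuming the legacy openai Python package (pre-1.0) and the text-davinci-002 completion model; the exact prompts and decoding parameters in the notebooks may differ.

```python
import os
import openai

# The notebooks expect the API key in this environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Legacy Completion API call (openai < 1.0). The prompt and decoding
# parameters here are illustrative, not necessarily the repo's exact settings.
response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Question: What is the capital of France?\nThought 1:",
    max_tokens=100,
    temperature=0,
    stop=["\n"],
)
print(response["choices"][0]["text"])
```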

Experiments

Run {hotpotqa,fever,alfworld,webshop}.ipynb. Since HotpotQA and FEVER have large validation sets, we only run 500 random examples (see the notebooks). We find that PaLM and GPT-3 are better at different tasks:

| Model | HotpotQA (500 random dev, EM) | FEVER (500 random dev, EM) | AlfWorld (success rate) | WebShop (success rate) |
| --- | --- | --- | --- | --- |
| PaLM-540B (paper) | 29.4 | 62.2 | 70.9 | 40 |
| GPT-3 (davinci-002) | 30.4 | 54 | 78.4 | 35.8 |
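
As a rough, hypothetical illustration of the evaluation these notebooks perform, the sketch below samples 500 random dev examples and scores them with exact match (EM). The dataset, agent, and seed are toy placeholders, not code from this repo.

```python
import random

# Toy stand-ins: in the notebooks, the examples come from the HotpotQA/FEVER
# dev files and the prediction comes from the ReAct prompting loop.
dev_set = [("Who wrote Hamlet?", "William Shakespeare")] * 1000

def react_agent(question):
    return "William Shakespeare"  # placeholder for the full Thought/Action/Observation loop

random.seed(0)
sample = random.sample(dev_set, 500)
em = sum(
    react_agent(q).strip().lower() == a.strip().lower() for q, a in sample
) / len(sample)
print(f"EM over 500 random dev examples: {em:.3f}")
```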

Citation

@inproceedings{yao2023react,
  title = {{ReAct}: Synergizing Reasoning and Acting in Language Models},
  author = {Yao, Shunyu and Zhao, Jeffrey and Yu, Dian and Du, Nan and Shafran, Izhak and Narasimhan, Karthik and Cao, Yuan},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year = {2023},
  html = {https://arxiv.org/abs/2210.03629},
}

ReAct's People

Contributors

ysymyth

ReAct's Issues

Have you considered renaming this project?

Hello, thank you for this important work and project!
I'm already seeing many references to the paradigm. The problem is that there was already a massively popular project named React. This makes searches for ReAct somewhat difficult.

cot->react & react->cot

Hello, I would like to ask whether there is a code implementation of the CoT -> ReAct and ReAct -> CoT methods mentioned in the paper.

How can I install ReAct?

Don't give me links to Alfworld! The installations there don't work, and the support is nonexistent.
How can I install ReAct on my Ubuntu 22.04?

How to finetune a small ReAct model

Hi, I was wondering how we could finetune a small ReAct model, given the prompts generated by prompting the LLM.

  1. Should we use LoRA or P-Tuning for the finetuning step?

  2. How should we use the prompt data?
    (1) Let all the actions and thoughts be the input and the final action (answer) be the output.
    (2) Parse the whole ReAct process and use the preceding in-context information as input and the current action as output (see the sketch below).
    (3) Or some other approach you used?

Really appreciate your help.
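
Below is a hypothetical illustration of option (2) above: splitting a single ReAct trajectory into (context, next-step) pairs, where everything before each Thought/Action line is the input and that line is the target. The trajectory text and function name are made up for the example; this is not code from this repo or an endorsed data format.

```python
# Hypothetical ReAct-style trajectory (HotpotQA flavor).
trajectory = [
    "Question: Which magazine was started first, Arthur's Magazine or First for Women?",
    "Thought 1: I need to search Arthur's Magazine and First for Women, and find which was started first.",
    "Action 1: Search[Arthur's Magazine]",
    "Observation 1: Arthur's Magazine was an American literary periodical published in Philadelphia in 1844.",
    "Thought 2: Arthur's Magazine was started in 1844. I need to search First for Women next.",
    "Action 2: Search[First for Women]",
    "Observation 2: First for Women is a woman's magazine launched in 1989.",
    "Thought 3: Arthur's Magazine was started first, so the answer is Arthur's Magazine.",
    "Action 3: Finish[Arthur's Magazine]",
]

def to_finetuning_pairs(lines):
    """Build (input, target) pairs: the model learns to emit each Thought/Action
    given all preceding context; Observations are environment outputs, never targets."""
    pairs = []
    for i, line in enumerate(lines):
        if line.startswith(("Thought", "Action")):
            pairs.append(("\n".join(lines[:i]) + "\n", line))
    return pairs

pairs = to_finetuning_pairs(trajectory)
print(f"{len(pairs)} (input, target) pairs; first target: {pairs[0][1]}")
```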

How did you go about finetuning?

Hi there, I cannot seem to find any information on the fine-tuning process in your paper and this repository.

A snippet from your paper:

However, when finetuned with just 3,000 examples, ReAct becomes the best method among the four, with PaLM-8B finetuned ReAct outperforming all PaLM-62B prompting methods, and PaLM-62B finetuned ReAct outperforming all 540B prompting methods. In contrast, finetuning Standard or CoT is significantly worse than finetuning ReAct or Act for both PaLM-8/62B, as the former essentially teaches models to memorize (potentially hallucinated) knowledge facts, and the latter teaches models how to (reason and) act to access information from Wikipedia, a more generalizable skill for knowledge reasoning.

Alfworld GPT-3 Results

Hi,
I was wondering whether you had more details or numbers from your GPT-3 results on AlfWorld. For instance, do you have the accuracy split across the different subtasks (as in Table 3 of the paper)?

I would try to reproduce it, but I reckon the total cost would be > $100 and would like to avoid it if possible.

FEVER and WebShop code

Hello @ysymyth, thanks for sharing your code, excellent work! Is there any plan to release the code of FEVER and WebShop? Thank you!
