[HN Gopher] ReAct: Synergizing Reasoning and Acting in Language ...
       ___________________________________________________________________
        
       ReAct: Synergizing Reasoning and Acting in Language Models
        
       Author : matthewfcarlson
       Score  : 36 points
       Date   : 2023-03-20 21:20 UTC (1 hour ago)
        
 (HTM) web link (react-lm.github.io)
 (TXT) w3m dump (react-lm.github.io)
        
       | minimaxir wrote:
       | The ReAct paradigm is one of the more powerful tools in the
       | recent LangChain package, which offers a more batteries-included
       | approach to using the pattern with models like GPT-3 and the
       | ChatGPT API.
       | 
       | https://langchain.readthedocs.io/en/latest/modules/agents/im...
       | 
       | https://langchain.readthedocs.io/en/latest/modules/agents/ex...
        
       | simonw wrote:
       | I wrote my own simplest-possible implementation of ReAct in
       | Python here, which I think helps demonstrate quite how much you
       | can get done with this pattern using only a very small amount of
       | code:
       | 
       | https://til.simonwillison.net/llms/python-react-pattern
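
       The loop Simon describes can be sketched in a few dozen lines. The
       version below is a minimal illustration of the pattern, not his
       actual code: the `calculate` tool, the `Action:` regex, and the
       scripted `fake_model` stand-in for a real LLM call are all
       assumptions made for the sake of a self-contained demo.

```python
import re

# ReAct loop sketch: the model emits "Thought: ..." / "Action: tool: arg"
# lines, the harness runs the named tool, appends "Observation: ..." to
# the transcript, and calls the model again until it emits "Answer: ...".
ACTION_RE = re.compile(r"^Action: (\w+): (.*)$", re.MULTILINE)


def calculate(expr: str) -> str:
    # Hypothetical example tool: evaluate a simple arithmetic expression.
    # (eval is fine for a demo; a real agent would use a safe parser.)
    return str(eval(expr, {"__builtins__": {}}, {}))


TOOLS = {"calculate": calculate}


def react_loop(question: str, model, max_turns: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_turns):
        response = model(transcript)          # in practice: an LLM API call
        transcript += "\n" + response
        match = ACTION_RE.search(response)
        if not match:
            # No action requested, so the model gave its final answer.
            return response.split("Answer:")[-1].strip()
        tool, arg = match.groups()
        observation = TOOLS[tool](arg)        # run the requested tool
        transcript += f"\nObservation: {observation}"
    raise RuntimeError("no answer within max_turns")


def fake_model(transcript: str) -> str:
    # Scripted stand-in for the LLM, hard-coded for one demo question.
    if "Observation:" not in transcript:
        return "Thought: I should compute this.\nAction: calculate: 2 + 3"
    return "Thought: I have the result.\nAnswer: 5"
```

       With the scripted model, `react_loop("What is 2 + 3?", fake_model)`
       takes one tool round-trip and returns "5". Swapping `fake_model`
       for a real completion call against a prompt that teaches the
       Thought/Action/Observation format is essentially all the TIL post
       adds on top of this skeleton.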
        
         | doctor_eval wrote:
         | That's nuts. Thanks for sharing.
        
         | nighthawk454 wrote:
         | Cheers, Simon - been seeing your comments around and enjoying
         | your blog and coverage of this stuff.
         | 
         | Is that prompt in your TIL really all it takes to inform it of
         | these 3 actions? That's pretty impressive. I wonder how many
         | actions it can scale to? I half expected some kind of
         | classifier layer to predict whether an action was necessary!
        
         | kfarr wrote:
         | Love this example! No offense to OP research paper but I
         | appreciate the simplicity of your Python version instead
         | 
         | PS also thanks for this genuine LOL moment from the intro:
         | 
         | > A popular nightmare scenario for AI is giving it access to
         | tools, so it can make API calls and execute its own code and
         | generally break free of the constraints of its initial
         | environment.
         | 
         | > Let's do that now!
        
       | akomtu wrote:
       | Where is "reason" in this model? A chain of semi-related thoughts
       | isn't reason. LLMs need a set of axioms and formal logic to
       | establish truthfulness of arbitrary statements.
        
       ___________________________________________________________________
       (page generated 2023-03-20 23:00 UTC)