
A Guide to Reflection Agents Using LlamaIndex


Introduction

Imagine you've just created an AI model that can write, reason, and solve complex problems. But what if I told you there's a way to make this AI even smarter by teaching it to think about its own thoughts? In this article, we'll explore the fascinating world of reflective AI agents. We'll start with the basics of how these agents can reflect on their own actions to improve over time. Then we'll delve into advanced techniques like Language Agent Tree Search (LATS) and Introspective Agents, showing you how to set up and use these methods with LlamaIndex. By the end, you'll see how these approaches are transforming AI, making it more autonomous and capable of tackling ever more challenging tasks. Join us on this journey to unlock the next level of AI intelligence!

Learning Outcomes

  • Understand the concept and significance of reflection in enhancing LLM-based agents.
  • Explore the implementation of Basic Reflection Agents using self-prompting techniques.
  • Learn about Language Agent Tree Search (LATS) and its role in improving AI task performance.
  • Gain hands-on experience with LATS framework setup and execution using LlamaIndex.
  • Implement Introspective Agents to refine responses iteratively using self-reflection and external tools.

This article was published as a part of the Data Science Blogathon.

Understanding Reflection or Introspective Agents

Many times an LLM fails to generate a satisfactory response for a given task. This is a common problem in artificial intelligence, where agents often lack the ability to self-evaluate and refine their outputs.

This is where reflection agents come to our rescue. People often talk about "System 1" and "System 2" thinking, with System 1 being reactive or instinctual and System 2 being more analytical and introspective. When used effectively, reflection can help LLM systems move away from purely System 1 "thinking" patterns and towards System 2-like behaviour.

Reflection Thinking

In LlamaIndex, reflection agents are implemented in the Introspective Agents module.

Introspective agents are a powerful concept that uses the reflection agent pattern within the LlamaIndex architecture. These agents take a distinct approach to task completion: rather than providing a single response, they engage in iterative refinement.

Steps in Basic Reflection Agents

  • Initial Response: The introspective agent starts by creating an initial response to the given task. This could be a preliminary answer to a query, a first attempt at completing an activity, or even a creative work.
  • Reflection and Correction: The agent then takes a step back to reflect on its first response. This reflection may be done either internally or via external tools (such as an API). LlamaIndex lets you pick the strategy that best meets your needs.
  • Refinement Cycle: Based on the reflection, the agent identifies areas for improvement and creates a revised answer. This cycle of reflection and correction continues until a stopping condition is met, such as reaching a certain degree of accuracy or completing a predetermined number of cycles (see the sketch after this list).
Introspective Agents. Source: LlamaIndex
  • Introspective Agent: An AI agent that employs a reflection agent pattern to iteratively refine its responses to a task.
  • Reflection Agent Pattern: A design approach for AI agents where they assess their outputs (reflection) and make adjustments (correction) before finalizing them.
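
To make the loop concrete, here is a minimal, runnable sketch of the reflect-and-correct cycle. It is illustrative only: generate, reflect, and refine are hypothetical stand-ins for LLM calls, not LlamaIndex APIs.

from dataclasses import dataclass


@dataclass
class Critique:
    is_satisfactory: bool
    feedback: str


# Hypothetical helpers -- in a real agent each of these would be an LLM call.
def generate(task: str) -> str:
    return f"Draft answer to: {task}"


def reflect(task: str, response: str) -> Critique:
    return Critique(is_satisfactory="revised" in response, feedback="Add more detail.")


def refine(task: str, response: str, critique: Critique) -> str:
    return f"{response} (revised: {critique.feedback})"


def run_reflection_loop(task: str, max_cycles: int = 3) -> str:
    response = generate(task)               # 1. initial response
    for _ in range(max_cycles):             # 3. refinement cycle
        critique = reflect(task, response)  # 2. reflection (internal or via tools)
        if critique.is_satisfactory:        # stopping condition
            break
        response = refine(task, response, critique)
    return response


print(run_reflection_loop("Summarize Lyft's 2021 risk factors"))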

What is Language Agent Tree Search (LATS)?

Language Agent Tree Search (LATS) is a general LLM agent search algorithm that improves overall task performance over comparable approaches such as ReAct, Reflexion, or Tree of Thoughts by combining reflection/evaluation with search (more specifically, Monte Carlo tree search). The paper, introduced by Zhou et al., can be read here.

The LATS framework, a first-of-its-kind general framework, combines LLMs' capacities for acting, reasoning, and planning. It advances the goal of creating generally capable autonomous agents that can reason and make decisions in a range of settings.

LATS also combines reflection/evaluation with search (notably, Monte Carlo tree search) to improve overall task performance. It uses a standard reinforcement learning (RL) task framework, replacing the RL agents, value functions, and optimizer with calls to an LLM. This helps the agent adapt and solve difficult tasks instead of getting stuck in repetitive cycles.

Steps in the LATS Framework

  • Generate Candidates: An initial response is generated, and multiple candidate actions are generated from it.
  • Expand and Simulate: Using the generated potential actions, expand each action and simulate its execution in parallel.
  • Reflect + Evaluate: Observe the outcomes of these actions and score the decisions based on reflection (and possibly external feedback from external tools).
  • Backpropagate: Update the scores of the root trajectories based on the outcomes.
  • Select: Pick the best next actions based on the aggregate rewards from the steps above. Either respond (if a solution is found or the maximum search depth is reached) or continue searching from step 1.

If the agent has a tight feedback loop (via high-quality environment rewards or reliable reflection scores), the search can reliably discern between multiple action paths and choose the optimal one. The resulting trajectory can then be saved to external memory (or used for model fine-tuning) so that the model can be improved later. A simplified sketch of this search loop follows the figure below.

LATS Framework. Source: LangChain Blogs
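
Before moving to the LlamaIndex implementation, here is a toy, self-contained sketch of the shape of this search loop. It is not the llama-index-agent-lats internals, and score_with_reflection is a random stand-in for the LLM-based reflection/evaluation step.

import random
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    action: str
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)
    value: float = 0.0  # aggregate backpropagated reward
    visits: int = 0


def score_with_reflection(action: str) -> float:
    return random.random()  # placeholder for an LLM critique score


def best_leaf(node: Node) -> Node:
    # Walk down the tree, always taking the child with the best mean reward.
    while node.children:
        node = max(node.children, key=lambda c: c.value / max(c.visits, 1))
    return node


def backpropagate(node: Optional[Node], reward: float) -> None:
    # Update the scores of the trajectory back up to the root.
    while node is not None:
        node.visits += 1
        node.value += reward
        node = node.parent


def lats_search(task: str, num_expansions: int = 2, max_rollouts: int = 3) -> Node:
    root = Node(action=task)
    for _ in range(max_rollouts):
        leaf = best_leaf(root)                            # select
        for i in range(num_expansions):                   # generate candidates
            child = Node(action=f"{leaf.action} -> step {i}", parent=leaf)
            leaf.children.append(child)                   # expand + simulate
            reward = score_with_reflection(child.action)  # reflect + evaluate
            backpropagate(child, reward)                  # backpropagate
    return best_leaf(root)


print(lats_search("Analyze Lyft 10-K risk factors").action)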

Code Implementation of LATS

LlamaIndex implements LATS as a separate package that can be installed and run out of the box. We will use Cohere embeddings and the Gemini LLM for this implementation. Both are freely available for use via trial API keys.

Step1: Install Libraries

We install the LlamaIndex libraries for LATS, Cohere, and Gemini, plus some supporting libraries for file reading.

!pip install llama-index-agent-lats --quiet
!pip install llama-index --quiet
!pip install llama-index-core llama-index-readers-file --quiet
!pip install cohere --quiet
!pip install llama-index-llms-cohere --quiet
!pip install llama-index-embeddings-cohere --quiet


!pip install -q llama-index google-generativeai --quiet
!pip install llama-index-llms-gemini --quiet

Step2: Generate API Keys

We need to generate a free API key for using the Cohere LLM. Visit the Cohere website and log in using a Google or GitHub account. Once logged in, you will land on a Cohere dashboard page as shown below.

Click on the API Keys option. You will see that a trial free API key has been generated.

Cohere API Key screen

For the Gemini API key, go to the Gemini website and click on the Get an API Key button as shown in the picture below. You will be redirected to Google AI Studio, where you will need to log in with your Google account and then find your generated API key.

Step3: Set API Keys in the Environment

Let us now set the API keys in the environment.

import os

os.environ["COHERE_API_KEY"] = "Cohere API key"
os.environ["GOOGLE_API_KEY"] = "Gemini API Key

import nest_asyncio

nest_asyncio.apply()

Step4: Download Data

This step is optional; you can supply your own PDF in the file path instead. Here we will use the Lyft 10-K financial report PDF, which was originally used in the research paper.

!mkdir -p 'data/10k/'

!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/lyft_2021.pdf' -O 'data/10k/lyft_2021.pdf'

Step5: Declare Models

We declare the Cohere embedding model for generating search query embeddings and the Gemini LLM for language generation. These two models provide the foundation for query handling and language understanding throughout this implementation.

from llama_index.embeddings.cohere import CohereEmbedding

# with input_type='search_query'
embed_model = CohereEmbedding(
    api_key="Cohere API key",   # replace with your Cohere API key
    model_name="embed-english-v3.0",
    input_type="search_query",
)


from llama_index.llms.gemini import Gemini

llm = Gemini(model="models/gemini-1.5-flash")
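
Optionally, we can sanity-check both models before building the index. This assumes the API keys set above are valid; the exact outputs will vary.

# Optional sanity check that both models respond (assumes valid API keys).
vector = embed_model.get_text_embedding("Lyft 2021 revenue")
print(len(vector))  # dimensionality of the Cohere embedding

print(llm.complete("Reply with one word: hello"))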

Step6: Create Vector Index

We load the Lyft 10-K PDF, build a vector index over it using the Cohere embedding model, and persist it to disk so that subsequent runs can load the index from storage instead of re-embedding the document.

import os
import time
from llama_index.core import (
    SimpleDirectoryReader,
    VectorStoreIndex,
    load_index_from_storage,
)
from llama_index.core.storage import StorageContext


if not os.path.exists("./storage/lyft"):
    # load data
    lyft_docs = SimpleDirectoryReader(
        input_files=["./data/10k/lyft_2021.pdf"]
    ).load_data()
    

    # build index
    lyft_index = VectorStoreIndex.from_documents(lyft_docs, embed_model=embed_model)
    
    # persist index
    lyft_index.storage_context.persist(persist_dir="./storage/lyft")
    
else:
    storage_context = StorageContext.from_defaults(
        persist_dir="./storage/lyft"
    )
    lyft_index = load_index_from_storage(storage_context)

# Retriever / query engine
lyft_engine = lyft_index.as_query_engine(similarity_top_k=3, llm=llm)
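
Before wrapping the engine in a tool, it can be queried directly as a quick check that the index works as expected:

# Optional: query the engine directly before handing it to the agent.
print(lyft_engine.query("What was Lyft's revenue for 2021?"))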

Step7: Create Query Engine Tool Using the Retriever Created Above

We wrap the query engine in a QueryEngineTool so the agent can call it as a tool. The metadata name and description tell the LLM when and how to use the tool.

from llama_index.core.tools import QueryEngineTool, ToolMetadata

query_engine_tools = [
    QueryEngineTool(
        query_engine=lyft_engine,
        metadata=ToolMetadata(
            name="lyft_10k",
            description=(
                "Provides information about Lyft financials for year 2021. "
                "Use a detailed plain text question as input to the tool. "
                "The input is used to power a semantic search engine."
            ),
        ),
    ),
]

Step8: Create LATS Agent

Now we can set up the LATS agent.

  • num_expansions denotes the number of possible sub-actions to generate under each node; num_expansions=2 means that we will explore two possible next actions for each parent action.
  • max_rollouts specifies how far each exploration of the search space goes; max_rollouts=3 means that the tree is explored to a maximum depth of 3 levels.

from llama_index.agent.lats import LATSAgentWorker
from llama_index.core.agent import AgentRunner

agent_worker = LATSAgentWorker.from_tools(
    query_engine_tools,
    llm=llm,
    num_expansions=2,
    max_rollouts=3,  # use -1 for unlimited rollouts
    verbose=True,
)
agent = AgentRunner(agent_worker)

Step9: Execute the Agent

Now we will run the agent with a query.

task = agent.create_task(
    "Give the risk factors for Lyft company using the report of 10k and how Lyft can mitigate each of these risk factors"
)

Run the task:

# run initial step
step_output = agent.run_step(task.task_id)

Running the whole loop:

# repeat until the last step is reached
while not step_output.is_last:
    step_output = agent.run_step(task.task_id)

response = agent.finalize_response(task.task_id)

Understanding the Output Steps of the Agent

Selection: Here the initial observation is selected based on the initial query; this is the parent node. It then generates the next candidates.

Expand and Simulate

Now the agent takes actions to fetch details for the task expansion above. It generates an output for each input action.

> Generated new reasoning step: Thought: I need to use a tool to understand the
potential impact of the risk factors mentioned in Lyft's 10K report.
Action: lyft_10k
Action Input: {'input': "What is the potential impact of the risk factors mentioned
in Lyft's 10K report for the year 2021?"}
Observation: The risk factors mentioned in Lyft's 10K report for the year 2021 could
negatively impact the company's business, financial condition, and results of
operations. These risks include general economic factors, operational factors, and
insurance-related factors.
> Generated new reasoning step: Thought: I need to use a tool to identify the risk
factors mentioned in Lyft's 10K report.
Action: lyft_10k
Action Input: {'input': "What are the risk factors mentioned in Lyft's 10K report
for the year 2021?"}
Observation: Lyft's 10K report for 2021 outlines several risk factors that could
impact their business, financial condition, and results of operations. These risks
include general economic factors, operational factors, and risks related to
attracting and retaining drivers and riders.

Reflect + Evaluate

Now it evaluates each observation obtained after the expansion and assigns a score.

> Evaluation for input Give the risk factors for Lyft company using the report of
10k and how Lyft can mitigate each of these risk factors
: score=7 is_done=False reasoning="The conversation is correctly identifying and
analyzing the risk factors mentioned in Lyft's 10K report. However, it has not yet
addressed the mitigation strategies for each risk factor. The latest action is
focused on understanding the potential impact of the risk factors,
which is a necessary step before moving on to mitigation strategies. Therefore, the
conversation is correct but incomplete."

> Evaluation for input Give the risk factors for Lyft company using the report of
10k and how Lyft can mitigate each of these risk factors
: score=5 is_done=False reasoning="The conversation has correctly identified the
need to analyze Lyft's 10K report to find risk factors and develop mitigation strategies.
However, it has only identified the risk factors and has not yet developed
any mitigation strategies. The conversation is on the right track but incomplete."

Generate

It then selects a node to expand based on the backpropagated scores above.

> Got final response: I am still thinking.
> Selecting node to expand: Observation: Lyft faces a variety of risks, including
general economic factors, operational factors, and insurance-related factors.

> Got candidates: ['Thought: I need to analyze the risk factors identified by the
 `lyft_10k` tool and categorize them into general economic factors, operational 
factors, and insurance-related factors.', "Thought: I need to brainstorm mitigation
 strategies for each risk factor identified by the `lyft_10k` tool, considering 
Lyft's specific circumstances and potential solutions like strategic partnerships,
technological advancements, cost optimization, and proactive regulatory
engagement."] 

Expand + Simulate

It now fetches mitigation strategies for each of the two candidates generated above. Below we can see that the second reasoning step, generated from the second candidate, gives a better response for the mitigation strategies.

> Generated new reasoning step: Thought: I need to use the `lyft_10k` tool to get
information on how Lyft can mitigate each of the risk factors.
Action: lyft_10k
Action Input: {'input': 'How can Lyft mitigate the risk factors identified in the
2021 10K report?'}
Observation: Lyft can mitigate the risks by focusing on building a strong reputation
and brand, attracting and retaining qualified drivers, and providing competitive
pricing and services to both riders and drivers.
> Generated new reasoning step: Thought: I can answer without using any more tools.
I'll use the user's language to answer.

Risk Factors

General Economic Factors:
Lyft's business is sensitive to economic downturns, which can lead to decreased
consumer spending and demand for ride-hailing services.
Operational Factors:
Lyft faces challenges related to driver availability, regulatory changes, and
competition from other transportation providers.
Insurance-Related Factors:
Lyft's insurance costs can fluctuate based on accident rates and regulatory changes,
impacting profitability.

Mitigation Strategies

General Economic Factors:
Diversify Revenue Streams: Explore new revenue streams beyond ride-hailing, such as
delivery services, bike-sharing, or subscription models.

Cost Optimization: Implement cost-cutting measures to improve efficiency and
profitability during economic downturns.

Strategic Partnerships: Collaborate with businesses in complementary industries to
offer bundled services and attract new customers.

Operational Factors:
Driver Retention: Implement programs to improve driver satisfaction and retention,
such as competitive pay, benefits, and flexible scheduling.

Technological Advancements: Invest in technology to optimize driver dispatching,
improve route efficiency, and enhance the rider experience.

Proactive Regulatory Engagement: Actively engage with regulators to advocate for
policies that support the ride-hailing industry.
Insurance-Related Factors:
Risk Management: Implement robust risk management programs to minimize accidents and
reduce insurance costs.

Data Analytics: Utilize data analytics to identify high-risk areas and implement
targeted safety measures.

Insurance Partnerships: Explore partnerships with insurance providers to secure
favorable rates and coverage.

Reflect + Evaluate

Here it again assigns scores to each of the expansions above. In the evaluation, the second response (from the second candidate) receives a higher score than the first.

Output

> Evaluation for input Give the risk factors for Lyft company using the report of
10k and how Lyft can mitigate each of these risk factors
: score=7 is_done=False reasoning='The conversation is correctly identifying and
categorizing the risk factors from the 10K report. It is also correctly using the
`lyft_10k` tool to get information on how Lyft can mitigate these risks. However,
the conversation has not yet provided specific mitigation strategies for each risk
factor. It has only provided general strategies like building a strong reputation
and attracting qualified drivers. Therefore, the conversation is correct but
incomplete.'

> Evaluation for input Give the risk factors for Lyft company using the report of
10k and how Lyft can mitigate each of these risk factors
: score=10 is_done=True reasoning='The conversation correctly identified the risk
factors from the 10K report using the `lyft_10k` tool. It then provided a
comprehensive list of mitigation strategies for each risk factor, demonstrating a
thorough understanding of the query. The answer is complete and addresses all
aspects of the query.'

Final Response

> Got final response:
Risk Factors
General Economic Factors:
Lyft's business is sensitive to economic downturns, which can lead to decreased
consumer spending and demand for ride-hailing services.
Operational Factors:
Lyft faces challenges related to driver availability, regulatory changes, and
competition from other transportation providers.
Insurance-Related Factors:
Lyft's insurance costs can fluctuate based on accident rates and regulatory changes,
impacting profitability.

Mitigation Strategies
General Economic Factors:
Diversify Revenue Streams: Explore new revenue streams beyond ride-hailing, such as
delivery services, bike-sharing, or subscription models.

Cost Optimization: Implement cost-cutting measures to improve efficiency and
profitability during economic downturns.

Strategic Partnerships: Collaborate with businesses in complementary industries to
offer bundled services and attract new customers.
Operational Factors:
Driver Retention: Implement programs to improve driver satisfaction and retention,
such as competitive pay, benefits, and flexible scheduling.

Technological Advancements: Invest in technology to optimize driver dispatching,
improve route efficiency, and enhance the rider experience.

Proactive Regulatory Engagement: Actively engage with regulators to advocate for
policies that support the ride-hailing industry.

Insurance-Related Factors:
Risk Management: Implement robust risk management programs to minimize accidents and
reduce insurance costs.

Data Analytics: Utilize data analytics to identify high-risk areas and implement
targeted safety measures.

Insurance Partnerships: Explore partnerships with insurance providers to secure
favorable rates and coverage.

Final Response Display

We can display the final response in Markdown format.

from IPython.display import display, Markdown

display(Markdown(str(response)))

Code Implementation of Introspective Agent with Self-Reflection Using an LLM

In this framework, the LLM agent itself performs the reflection, analyzing and improving the response iteratively. Here we will use a self-reflective agent to gradually improve a toxic input text and generate a safer version of the text as the final response.

LlamaIndex Introspective Agents framework

Step1: Install Libraries

We install the LlamaIndex libraries for introspective agents, Cohere, and Gemini, plus some supporting libraries for file reading.

!pip install llama-index-agent-introspective -q
!pip install llama-index --quiet
!pip install llama-index-core llama-index-readers-file --quiet
!pip install cohere --quiet
!pip install llama-index-llms-cohere --quiet
!pip install llama-index-embeddings-cohere --quiet

!pip install llama-index-llms-openai -q
!pip install llama-index-program-openai -q

!pip install -q llama-index google-generativeai --quiet
!pip install llama-index-llms-gemini --quiet

Step2: Set API Keys in the Environment

import os

os.environ["COHERE_API_KEY"] = "Cohere API key"
os.environ["GOOGLE_API_KEY"] = "Gemini API Key

import nest_asyncio

nest_asyncio.apply()

Step3: Declare Model

We use the Gemini LLM with permissive safety settings so that the agent can accept the toxic input text and reflect on it, rather than having the request blocked outright.

from llama_index.llms.gemini import Gemini
from google.generativeai.types import HarmCategory, HarmBlockThreshold

# Safety settings: allow the agent to receive the toxic input for reflection
safety_settings = {
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}

# Initialise LLM
llm = Gemini(model="models/gemini-1.5-flash", safety_settings=safety_settings)

Step4: Build the Self-Reflective Agent

In this step, we build a self-reflective agent by defining a SelfReflectionAgentWorker and, optionally, a MainAgentWorker, and then constructing an IntrospectiveAgent from these components. This setup lets the agent reflect on its outputs and improve them through introspection.

from llama_index.agent.introspective import SelfReflectionAgentWorker

from llama_index.agent.introspective import IntrospectiveAgentWorker
from llama_index.agent.introspective import (
    ToolInteractiveReflectionAgentWorker,
)

from llama_index.agent.openai import OpenAIAgentWorker
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.llms import ChatMessage, MessageRole
from llama_index.core import ChatPromptTemplate


def get_introspective_agent_with_self_reflection(
    verbose=True, with_main_worker=False
):
    """Helper operate for constructing introspective agent utilizing self reflection.

    Steps:

    1. Outline the `SelfReflectionAgentWorker`
        1a. Assemble `SelfReflectionAgentWorker` utilizing .from_defaults()

    2. Optionally outline a `MainAgentWorker`

    3. Assemble `IntrospectiveAgent`
        3a. Assemble `IntrospectiveAgentWorker` utilizing .from_defaults()
        3b. Assemble `IntrospectiveAgent` utilizing .as_agent()
    """

    # 1a.
    self_reflection_agent_worker = SelfReflectionAgentWorker.from_defaults(
        llm=llm,
        verbose=verbose,
    )

    # 2.
    if with_main_worker:
        main_agent_worker = OpenAIAgentWorker.from_tools(
            tools=[], llm=llm, verbose=True
        )
    else:
        main_agent_worker = None

    # 3a.
    introspective_worker_agent = IntrospectiveAgentWorker.from_defaults(
        reflective_agent_worker=self_reflection_agent_worker,
        main_agent_worker=main_agent_worker,
        verbose=verbose,
    )

    chat_history = [
        ChatMessage(
            content="You are an assistant that generates safer versions of potentially toxic, user-supplied text.",
            role=MessageRole.SYSTEM,
        )
    ]

    # 3b.
    return introspective_worker_agent.as_agent(
        chat_history=chat_history, verbose=verbose
    )


introspective_agent = get_introspective_agent_with_self_reflection(
    verbose=True
)

Step5: Run the Query

In this step, we run a query by passing a potentially harmful text to the introspective agent for processing. The agent analyzes the text and generates a safer version, demonstrating its ability to handle and mitigate toxic content.

harmful_text = (
    "Those individuals who publicly tout eating no meat, "
    "especially when their stated reason has to do with caring "
    "about animals, are some of the most hypocritical people I've "
    "ever met. One vegan acquaintance routinely beat her"
)
response = await introspective_agent.achat(harmful_text)

Output

> Running step 9c64852c-870e-4e18-bd39-e6413b1eddcd. Step input: Those individuals
who publicly tout eating no meat, especially when their stated reason has to do with
caring about animals, are some of the most hypocritical people I've ever met. One
vegan acquaintance routinely beat her
Added user message to memory: Those individuals who publicly tout eating no meat,
especially when their stated reason has to do with caring about animals, are some
of the most hypocritical people I've ever met. One vegan acquaintance routinely
beat her
> Running step 5e19282e-c1fa-4b19-a3b0-9aa49eba2997. Step input: Those individuals
who publicly tout eating no meat, especially when their stated reason has to do with
caring about animals, are some of the most hypocritical people I've ever met. One
vegan acquaintance routinely beat her
> Reflection: {'is_done': False, 'feedback': "The agent has not made any tool calls
or produced any output. It needs to generate a safer version of the user's text."}
Correction: I've met some individuals who publicly tout eating no meat, especially when
their stated reason has to do with caring about animals, who seem hypocritical. For
example, I once knew a vegan who routinely beat her
> Running step 2292b173-31f8-456c-b24b-66e13760032f. Step input: None
> Reflection: {'is_done': False, 'feedback': "The agent has made a good start by
generating a safer version of the user's text. However, it has not finished the
task. It needs to complete the sentence and provide a safer version of the entire
input."}
Correction: I've met some individuals who publicly tout eating no meat, especially when
their stated reason has to do with caring about animals, who seem hypocritical. For
example, I once knew a vegan who routinely beat her dog.
> Running step dd008df5-e28f-40a9-bacc-e4d02a84f0ba. Step input: None
> Reflection: {'is_done': False, 'feedback': "The agent has made a good start by
generating a safer version of the user's text. However, it has not finished the
task. It needs to complete the sentence and provide a safer version of the entire
input."}
Correction: I've met some individuals who publicly tout eating no meat, especially when
their stated reason has to do with caring about animals, who seem hypocritical. It's
important to remember that everyone has their own journey and that actions don't
always reflect beliefs. For example, I once knew a person who identified as vegan
but whose actions didn't always align with their stated beliefs.
> Running step e15a8aa0-de0a-49f6-8ff0-34b3002ef239. Step input: None
> Reflection: {'is_done': True, 'feedback': "The agent has successfully completed
the task by generating a safer version of the user's text. The final message is an
ASSISTANT message, indicating that the agent is done thinking."}

Final Response

response.response

I've met some individuals who publicly tout eating no meat, especially when their
stated reason has to do with caring about animals, who seem hypocritical. It's
important to remember that everyone has their own journey and that actions don't
always reflect beliefs. For example, I once knew a person who identified as vegan
but whose actions didn't always align with their stated beliefs.

This introspective agent is similar to the self-reflection introspective agent above, except that for the reflection agent worker we pass an external tool, such as an API, to generate the reflections. Here too we will generate safer versions of toxic text, but instead of an LLM we will use the Perspective API, which gives a toxicity score for a text.

Code Implementation Using External Tools

We will now implement the same pattern using an external tool for reflection.

Step1: Install Libraries

We install the LlamaIndex libraries for introspective agents and OpenAI, plus some supporting libraries for file reading.

%pip install llama-index-agent-introspective -q
%pip install google-api-python-client -q
%pip install llama-index-llms-openai -q
%pip install llama-index-program-openai -q
%pip install llama-index-readers-file -q

Step2: Set API Keys in the Environment

To use Perspective's API, you will need to complete the following steps:

  • Enable the Perspective API in your Google Cloud project
  • Generate a new set of credentials (i.e., an API key) that you will need to set as an environment variable

To perform steps 1 and 2, you can follow the instructions outlined here: https://developers.perspectiveapi.com/s/docs-enable-the-api?language=en_US.

import os

os.environ["OPEN_API_KEY"] = "OPEN API KEY"
os.environ["PERSPECTIVE_API_KEY"] = "Perspective API"

Step3: Build the Perspective Helper Class

We now define a custom Perspective class to interact with the Perspective API, which is used to analyze text for various attributes like toxicity, identity attack, and profanity. This class makes the API calls to obtain toxicity scores, which are essential for evaluating and handling potentially harmful content in text.

from googleapiclient import discovery
from typing import Dict, Optional
import json
import os


class Perspective:
    """Custom class to interact with Perspective API."""

    attributes = [
        "toxicity",
        "severe_toxicity",
        "identity_attack",
        "insult",
        "profanity",
        "threat",
        "sexually_explicit",
    ]

    def __init__(self, api_key: Optional[str] = None) -> None:
        if api_key is None:
            try:
                api_key = os.environ["PERSPECTIVE_API_KEY"]
            except KeyError:
                raise ValueError(
                    "Please provide an api key or set PERSPECTIVE_API_KEY env var."
                )

        self._client = discovery.build(
            "commentanalyzer",
            "v1alpha1",
            developerKey=api_key,
            discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
            static_discovery=False,
        )

    def get_toxicity_scores(self, text: str) -> Dict[str, float]:
        """Makes an API call to Perspective to get toxicity scores across various attributes."""

        analyze_request = {
            "comment": {"text": text},
            "requestedAttributes": {
                att.upper(): {} for att in self.attributes
            },
        }

        response = (
            self._client.comments().analyze(body=analyze_request).execute()
        )
        try:
            return {
                att: response["attributeScores"][att.upper()]["summaryScore"][
                    "value"
                ]
                for att in self.attributes
            }
        except Exception as e:
            raise ValueError("Unable to parse response") from e


perspective = Perspective()
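
As a quick, optional check of the helper (assuming the Perspective API is enabled and the key is valid), we can score a sample sentence; the exact values will vary.

# Optional sanity check: score a sample sentence across all attributes.
scores = perspective.get_toxicity_scores("Have a nice day!")
print(scores["toxicity"])  # a low value is expected for benign text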

Step4: Build the Perspective Tool

In this step, we create a Perspective tool using the perspective_function_tool function. This function computes toxicity scores for a given text, returning the most problematic toxic attribute and its score. A FunctionTool is then used to integrate this function into the agent, enabling analysis of text toxicity.

from typing import Tuple
from llama_index.core.bridge.pydantic import Field


def perspective_function_tool(
    text: str = Field(
        default_factory=str,
        description="The text to compute toxicity scores on.",
    )
) -> Tuple[str, float]:
    """Returns the toxicity score of the most problematic toxic attribute."""

    scores = perspective.get_toxicity_scores(text=text)
    max_key = max(scores, key=scores.get)
    return (max_key, scores[max_key] * 100)


from llama_index.core.tools import FunctionTool

perspective_tool = FunctionTool.from_defaults(
    perspective_function_tool,
)
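
The tool can also be invoked directly before handing it to the critique agent; the call below and its printed score are illustrative.

# Optional: invoke the tool directly; returns the most toxic attribute
# and its score (out of 100) wrapped in a ToolOutput.
print(perspective_tool.call(text="You are a horrible person."))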

Step5: Build the Introspective Agent Using Tool-Interactive Reflection

With our tool defined, we can now build our IntrospectiveAgent and the required ToolInteractiveReflectionAgentWorker. To construct the latter, we also need to construct a CritiqueAgentWorker that will ultimately be responsible for performing the reflection with the tool.

The code provided below defines a helper function to construct this IntrospectiveAgent. We do this for convenience, since we will later compare the two reflection methods.

from llama_index.agent.introspective import IntrospectiveAgentWorker
from llama_index.agent.introspective import (
    ToolInteractiveReflectionAgentWorker,
)

from llama_index.llms.openai import OpenAI
from llama_index.agent.openai import OpenAIAgentWorker
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.llms import ChatMessage, MessageRole
from llama_index.core import ChatPromptTemplate


def get_introspective_agent_with_tool_interactive_reflection(
    verbose=True, with_main_worker=False
):
    """Helper operate for constructing introspective agent utilizing tool-interactive reflection.

    Steps:

    1. Outline the `ToolInteractiveReflectionAgentWorker`
        1a. Assemble a CritiqueAgentWorker that performs reflection with instruments.
        1b. Outline an LLM that shall be used to generate corrections in opposition to the critique.
        1c. Outline a operate that determines the stopping situation for reflection/correction
            cycles
        1d. Assemble `ToolInteractiveReflectionAgentWorker` utilizing .from_defaults()

    2. Optionally outline a `MainAgentWorker`

    3. Assemble `IntrospectiveAgent`
        3a. Assemble `IntrospectiveAgentWorker` utilizing .from_defaults()
        3b. Assemble `IntrospectiveAgent` utilizing .as_agent()
    """

    # 1a.
    critique_agent_worker = FunctionCallingAgentWorker.from_tools(
        tools=[perspective_tool], llm=OpenAI("gpt-3.5-turbo"), verbose=verbose
    )
    # 1b.
    correction_llm = OpenAI("gpt-4-turbo-preview")

    # 1c.
    def stopping_callable(critique_str: str) -> bool:
        """Determines the stopping condition for reflection & correction cycles.

        critique_str [str]: The response string provided by the critique agent.
        """

        return "[PASS]" in critique_str

    # 1d.
    tool_interactive_reflection_agent_worker = (
        ToolInteractiveReflectionAgentWorker.from_defaults(
            critique_agent_worker=critique_agent_worker,
            critique_template=(
                "Please present a toxicity critique for the offered textual content. "
                "If the toxicity rating is lower than 3.0 then please "
                "write '[PASS]' in any other case write '[FAIL]'. "
                "Right here is the textual content:n {input_str}"
            ),
            stopping_callable=stopping_callable,
            correction_llm=correction_llm,
            verbose=verbose,
        )
    )

    # 2.
    if with_main_worker:
        main_agent_worker = OpenAIAgentWorker.from_tools(
            tools=[], llm=OpenAI("gpt-4-turbo-preview"), verbose=True
        )
    else:
        main_agent_worker = None

    # 3a.
    introspective_agent_worker = IntrospectiveAgentWorker.from_defaults(
        reflective_agent_worker=tool_interactive_reflection_agent_worker,
        main_agent_worker=main_agent_worker,
        verbose=verbose,
    )

    chat_history = [
        ChatMessage(
            content="You are an assistant that generates safer versions of potentially toxic, user-supplied text.",
            role=MessageRole.SYSTEM,
        )
    ]

    # 3b.
    return introspective_agent_worker.as_agent(
        chat_history=chat_history, verbose=verbose
    )


introspective_agent = get_introspective_agent_with_tool_interactive_reflection(
    verbose=True,
)

Step6: Run the Query

We now run a query using the introspective_agent to analyze a piece of potentially harmful text. The agent processes the text and returns a response based on its introspective and reflection capabilities.

harmful_text = (
    "Those individuals who publicly tout eating no meat, "
    "especially when their stated reason has to do with caring "
    "about animals, are some of the most hypocritical people I've "
    "ever met. One vegan acquaintance routinely beat her"
)
response = await introspective_agent.achat(harmful_text)

Output:

"Individuals who choose not to eat meat for ethical reasons related to animal
welfare are making a personal decision. It's important to respect diverse
perspectives and experiences."

Conclusion

In conclusion, leveraging reflection agents within the LlamaIndex framework offers a powerful strategy for enhancing AI performance and reliability. By implementing methods like Language Agent Tree Search (LATS), basic introspective agents, and agents using external tools, we can significantly improve the depth and accuracy of AI responses. These agents shift AI behaviour from reactive to more analytical and introspective patterns, akin to human System 2 thinking. As demonstrated, reflection agents iteratively refine their outputs, ensuring a higher quality of responses for complex tasks. This approach not only improves AI effectiveness but also paves the way for more autonomous and intelligent systems.

Key Takeaways

  • Learned about the reflection agent concept in generative AI.
  • Understood the various types of reflection agents.
  • Learned how to implement a LATS reflection agent using LlamaIndex.
  • Explored how to implement a self-reflection introspective agent using LlamaIndex.

Frequently Asked Questions

Q1. What is the main benefit of using reflection agents in LLM-based frameworks?

A. Reflection agents in LLM-based frameworks improve response quality and accuracy by self-evaluating, identifying errors, and refining iteratively, resulting in more reliable and effective performance.

Q2. How does the reflection process differ when using external tools compared to self-prompting in LlamaIndex?

A. With self-prompting, the LLM relies on its own internal critique of its output, whereas tool-based reflection delegates the critique to an external tool (such as the Perspective API), allowing for more robust corrections when external signals are available.

Q3. What are some potential challenges when implementing reflection agents in LlamaIndex?

A. Implementing reflection agents in LlamaIndex involves challenges like managing computational overhead, ensuring external tool accuracy, designing stopping conditions, and integrating the reflection process into workflows.

Q4. How does the Language Agent Tree Search (LATS) framework enhance the performance of reflection agents compared to other methods?

A. The LATS framework improves reflection agents' performance by integrating Monte Carlo Tree Search (MCTS), enabling parallel exploration, evaluation, and optimal path selection, leading to more informed decision-making.

Q5. What are the other frameworks where reflection agents are available?

A. LangChain offers reflection agents, but they are implemented via LangGraph; it does not offer an out-of-the-box solution the way LlamaIndex does.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.


