Introduction
“AI agentic workflows will drive massive AI progress this year,” commented Andrew Ng, highlighting the many developments anticipated in AI. With the growing popularity of large language models, autonomous agents have become a hot topic of discussion. In this article, we’ll explore autonomous agents, cover the components of building an agentic workflow, and walk through a practical implementation of a content creation agent using Groq and crewAI.
Learning Objectives
- Walk through the working of autonomous agents with a simple example of how humans execute a given task.
- Understand the limitations and open research areas of autonomous agents.
- Explore the core components required to build an AI agent pipeline.
- Build a content creator agent using crewAI, an open-source agentic framework.
- Integrate an open-source large language model into the agentic framework with the support of LangChain and Groq.
Understanding Autonomous Agents
Imagine a group of engineers gathering to plan the development of a new software app. Each engineer brings expertise and insights, discussing various features, functionalities, and potential challenges. They brainstorm, analyze, and strategize together, aiming to create a comprehensive plan that meets the project’s goals.
The attempt to replicate this collaboration using large language models is what ultimately gave rise to autonomous agents.
Autonomous agents possess reasoning and planning capabilities similar to human intelligence, making them advanced AI systems. They are essentially LLMs with a “brain” that can self-reason and plan the decomposition of tasks. A prime example of such an agent is “Devin AI,” which has sparked numerous discussions about its potential to replace human software engineers.
Although this replacement may be premature given the complexity and iterative nature of software development, ongoing research in this field aims to address key areas like self-reasoning and memory utilization.
Key Areas of Research for Improving Agents
- Self-Reasoning: Enhancing the ability of LLMs to reduce hallucinations and provide accurate responses.
- Memory Utilization: Storing past responses and experiences to avoid repeating mistakes and improve task execution over time.
- Prompt Techniques for Agents: Currently, most frameworks use ReAct prompting to execute agentic workflows. Other options include CoT-SC (Chain-of-Thought Self-Consistency), self-reflection, LATS, and so on. This is an ongoing research area where new prompting techniques are benchmarked on open-source datasets.
Before exploring the components required for building an agentic workflow, let’s first understand how humans execute a simple task.
Simple Task Execution Workflow

How do we, as humans, approach a problem statement? We follow a structured process to ensure efficient and successful execution. Take, for example, building a customer support chatbot. We don’t dive straight into coding. Instead, we plan by breaking down the task into smaller, manageable sub-tasks such as fetching data, cleaning data, building the model, and so on.
For each sub-task, we draw on our relevant skills and knowledge of the right tools and frameworks to get the job done. Each sub-task requires careful planning and execution, informed by earlier learnings, so that we don’t make mistakes along the way.
This method involves several iterations until the task is completed successfully. An agent workflow operates in a similar way. Let’s break it down step by step to see how it mirrors our approach.
AI Agent Component Workflow

At the heart of the workflow are the agents. Users provide a detailed description of the task. Once the task is defined, the agent uses its planning and reasoning components to break it down further. This involves using a large language model with prompt techniques like ReAct. In this prompt engineering approach, the process is divided into three parts: Thoughts, Actions, and Observations. Thoughts reason about what needs to be done, actions invoke the supported tools that supply the additional context required by the LLM, and observations record the results those tools return.
- Task Description: The task is described and passed to the agent.
- Planning and Reasoning: The agent uses various prompting techniques (e.g., ReAct prompting, Chain of Thought, Self-Consistency, Self-Reflection, Language Agent Tree Search) to plan and reason.
- Tools Required: The agent uses tools like web APIs or GitHub APIs to complete subtasks.
- Memory: The task and each sub-task’s result are stored in memory for future reference.
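The loop described above can be sketched in plain Python. Note that `fake_llm`, `lookup_tool`, and `react_loop` below are illustrative stubs, not crewAI or Groq APIs; this is only a minimal sketch of the Thought → Action → Observation cycle, assuming a tool registry keyed by name.

```python
# Minimal ReAct-style loop with stubbed components (illustrative only).
def lookup_tool(query: str) -> str:
    # Stand-in for a real search tool such as SerperDevTool.
    return f"Observation: top result for '{query}'"

TOOLS = {"search": lookup_tool}

def fake_llm(history: list) -> str:
    # Stand-in for a real LLM: it decides to act once, then finishes.
    if not any(line.startswith("Observation") for line in history):
        return "Thought: I need context.\nAction: search[Indian Elections 2024]"
    return "Thought: I have enough context.\nFinal Answer: draft the report"

def react_loop(task: str, max_iter: int = 5) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_iter):  # mirrors crewAI's max_iter safeguard
        step = fake_llm(history)
        history.append(step)
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        # Parse "Action: tool[input]" and run the matching tool.
        action = step.split("Action:")[1].strip()
        name, arg = action.split("[", 1)
        history.append(TOOLS[name](arg.rstrip("]")))
    return "max iterations reached"

print(react_loop("Report on recent developments"))
```

Real frameworks do the same parsing and dispatching; the agent simply alternates between asking the LLM what to do next and feeding tool results back into the conversation.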
It’s time to get our hands dirty with some code.
Content Creator Agent Using Groq and CrewAI
Let us now look at the steps to build an agentic workflow using crewAI and an open-source model served by Groq.
Step 1: Installation
- crewai: The open-source agents framework.
- 'crewai[tools]': Supported tool integrations for fetching contextual data as actions.
- langchain_groq: Much of the crewAI backend is based on LangChain, so we use langchain_groq directly to run LLM inference.
pip install crewai
pip install 'crewai[tools]'
pip install langchain_groq
Step 2: Set Up the API Keys
To integrate the tool and the large language model settings, securely store your API keys using the getpass module. The SERPER_API_KEY can be obtained from serper.dev, and the GROQ_API_KEY from console.groq.com.
import os
from getpass import getpass
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool
from langchain_groq import ChatGroq

SERPER_API_KEY = getpass("Your Serper API key: ")
os.environ['SERPER_API_KEY'] = SERPER_API_KEY
GROQ_API_KEY = getpass("Your Groq API key: ")
os.environ['GROQ_API_KEY'] = GROQ_API_KEY
Step 3: Integrate the Gemma Open-Source Model
Groq is a hardware and software platform building the LPU AI Inference Engine, known for being one of the fastest LLM inference engines in the world. With Groq, users can efficiently run inference on open-source LLMs such as Gemma, Mistral, and Llama with low latency and high throughput. To integrate Groq into crewAI, you can seamlessly import it via LangChain.
llm = ChatGroq(model="gemma-7b-it", groq_api_key=GROQ_API_KEY)
print(llm.invoke("hi"))
Step 4: Search Tool
SerperDevTool is a search API tool that browses the internet and returns metadata, relevant query result URLs, and brief snippets as descriptions. This information helps agents execute tasks more effectively by providing them with contextual data from web searches.
search_tool = SerperDevTool()
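For context on what the tool feeds the agent, here is an illustrative (not live) Serper-style payload and a helper that condenses it; `sample_response` and `summarize_results` are hypothetical names used for this sketch, and the exact response schema may differ from the real API.

```python
# Illustrative Serper-style payload (not a live API response).
sample_response = {
    "organic": [
        {"title": "Example result", "link": "https://example.com",
         "snippet": "A brief description of the page."},
    ]
}

def summarize_results(response: dict) -> list:
    # Condense each organic result into the "title - link: snippet" form
    # an agent would consume as contextual data.
    return [f"{r['title']} - {r['link']}: {r['snippet']}"
            for r in response.get("organic", [])]

for line in summarize_results(sample_response):
    print(line)
```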
Step 5: Agent

Agents are the core component of the entire code implementation in crewAI. An agent is responsible for performing tasks, making decisions, and communicating with other agents to complete decomposed tasks.
To achieve better results, it is essential to prompt the agent’s attributes correctly. Each agent definition in crewAI consists of a role, goal, backstory, tools, and LLM. These attributes give agents their identity:
- Role: Determines the kind of tasks the agent is best suited for.
- Goal: Defines the objective that guides the agent’s decision-making process.
- Backstory: Provides context to the agent about its capabilities based on its learning.
One of the major advantages of crewAI is its multi-agent functionality, which allows one agent’s response to be delegated to another agent. To enable task delegation, the allow_delegation parameter needs to be set to True.
Agents run multiple times until they return a correct result. You can control the maximum number of iterations by setting the max_iter parameter to 10 or a value close to it.
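Under the hood, attributes like role, goal, and backstory are typically interpolated into the model’s system prompt. The template below is a hypothetical sketch of that idea; crewAI’s actual prompt wording differs, and `build_system_prompt` is not a real crewAI function.

```python
# Hypothetical prompt assembly -- crewAI's real templates differ.
def build_system_prompt(role: str, goal: str, backstory: str) -> str:
    # Each attribute slots into a fixed template that shapes the
    # agent's identity for every LLM call it makes.
    return (
        f"You are {role}. {backstory}\n"
        f"Your personal goal is: {goal}"
    )

prompt = build_system_prompt(
    role="Researcher",
    goal="Pioneer groundbreaking advancements in AI",
    backstory="As a visionary researcher, your curiosity drives you to explore emerging fields.",
)
print(prompt)
```

This is why well-written attributes matter: they are not metadata, they are literal text the model reads on every step.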
Note:
- By default, crewAI uses OpenAI as the LLM. To change this, Groq can be defined as the LLM.
- {subject_area}: This is an input variable, meaning the value declared inside the curly brackets needs to be provided by the user.
Python Code Implementation
researcher = Agent(
    role="Researcher",
    goal="Pioneer groundbreaking advancements in {subject_area}",
    backstory=(
        "As a visionary researcher, your insatiable curiosity drives you "
        "to delve deep into emerging fields. With a passion for innovation "
        "and a commitment to scientific discovery, you seek to "
        "develop technologies and solutions that could transform the future."
    ),
    llm=llm,
    max_iter=5,
    tools=[search_tool],
    allow_delegation=True,
    verbose=True
)
writer = Agent(
    role="Writer",
    goal="Craft engaging and insightful narratives about {subject_area}",
    verbose=True,
    backstory=(
        "You are a skilled storyteller with a talent for demystifying "
        "complex innovations. Your writing illuminates the significance "
        "of new technological discoveries, connecting them with everyday "
        "lives and broader societal impacts."
    ),
    tools=[search_tool],
    llm=llm,
    max_iter=5,
    allow_delegation=False
)
Step 6: Task
Agents can only execute tasks when they are provided by the user. Tasks are specific assignments completed by agents, and they supply all the details necessary for execution. For the agent to decompose and plan the subtasks, the user needs to define a clear description and an expected output. Each task must be linked to the responsible agent and the necessary tools.
Additionally, crewAI offers the flexibility to run a task asynchronously via async_execution. Since in our case the execution is sequential, we can leave it as False.
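As a loose analogy for what the async_execution flag controls (this is plain Python, not crewAI internals), a synchronous task blocks until it finishes, while an asynchronous one is submitted immediately and only awaited when its output is needed:

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(name: str) -> str:
    # Stand-in for an agent executing a task.
    return f"{name} done"

# Synchronous (async_execution=False): each task blocks the next.
first = run_task("research_task")

# Asynchronous (async_execution=True): submit now, collect later.
with ThreadPoolExecutor() as pool:
    future = pool.submit(run_task, "write_task")
    second = future.result()  # blocks only when the output is needed

print(first, "|", second)
```

Since write_task depends on research_task's findings, a sequential (blocking) order is the right choice here.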
research_task = Task(
    description=(
        "Identify the major developments within {subject_area}. "
        "Produce a detailed SEO report on the developments in a comprehensive narrative."
    ),
    expected_output="A report, structured into three detailed paragraphs",
    tools=[search_tool],
    agent=researcher
)
write_task = Task(
    description=(
        "Craft an engaging article on recent developments within {subject_area}. "
        "The article should be clear, captivating, and positive, tailored for a broad audience."
    ),
    expected_output="A blog article on recent developments in {subject_area}, in markdown.",
    tools=[search_tool],
    agent=writer,
    async_execution=False,
    output_file="blog.md"
)
Step 7: Run and Execute the Agent
To execute a multi-agent setup in crewAI, you need to define the Crew. A Crew is a collection of agents working together to accomplish a set of tasks. Each Crew establishes the strategy for task execution, agent cooperation, and the overall workflow. In our case, the workflow is sequential, with one agent handing its output to the next.
Finally, the Crew executes by running the kickoff function with the user input.
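This sequential hand-off can be sketched as a simple pipeline. The snippet is a stand-in for crewAI internals, not its actual implementation: each task function receives the previous task’s output as extra context, which is what Process.sequential guarantees.

```python
# Simplified sequential process: each task sees the prior task's output.
def research(topic: str, context: str = "") -> str:
    return f"Findings on {topic}"

def write(topic: str, context: str = "") -> str:
    return f"Article on {topic}, based on: {context}"

def sequential_kickoff(tasks, topic: str) -> str:
    context = ""
    for task in tasks:  # Process.sequential: strict, in-order execution
        context = task(topic, context=context)
    return context  # the final task's output is the crew's result

result = sequential_kickoff([research, write], "Indian Elections 2024")
print(result)
```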
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    max_rpm=3,
    cache=True
)
result = crew.kickoff(inputs={'subject_area': "Indian Elections 2024"})
print(result)
Output: New file created: blog.md

Conclusion
As discussed in this article, the field of agents remains research-focused, with development gaining momentum through the release of several open-source agent frameworks. This article is primarily a beginner’s guide for those interested in building agents without relying on closed-source large language models. It also highlights the importance of prompt engineering in maximizing the potential of both large language models and agents.
Key Takeaways
- Understanding the necessity of an agentic workflow in today’s landscape of large language models.
- The agentic workflow is easy to grasp when compared with a simple human task execution workflow.
- While models like GPT-4 and Gemini are prominent, they are not the only options for building agents. Open-source models, served through the Groq API, enable the creation of faster-inference agents.
- CrewAI offers a wide range of multi-agent functionality and workflows, which helps with efficient task decomposition.
- Agent prompt engineering stands out as a crucial factor for enhancing task decomposition and planning.
Frequently Asked Questions
Q. Can a custom LLM be used with crewAI?
A. crewAI integrates seamlessly with the LangChain backend, a powerful data framework that links large language models (LLMs) to custom data sources. With over 50 LLM integrations, LangChain is one of the largest tools for integrating LLMs. Therefore, any LangChain-supported LLM can be connected to crewAI agents as a custom LLM.
Q. What are the alternatives to crewAI?
A. crewAI is an open-source agent framework that supports multi-agent functionality. Similar to crewAI, there are various powerful open-source agent frameworks available, such as AutoGen, OpenAGI, SuperAGI, AgentLite, and more.
Q. Is crewAI free to use?
A. Yes, crewAI is open source and free to use. One can easily build an agentic workflow in around 15 lines of code.
Q. Is Groq free to use?
A. Yes, Groq is currently free to use with some API restrictions, such as limits on requests per minute and tokens per minute. It offers low-latency inference for open-source models such as Gemma, Mistral, Llama, and others.


