Building an Agentic Workflow with CrewAI and Groq


Introduction

“AI agentic workflows will drive massive progress this year,” commented Andrew Ng, highlighting the significant advances expected in AI. With the growing popularity of large language models, autonomous agents have become a topic of discussion. In this article, we will explore autonomous agents, cover the components needed to build an agentic workflow, and walk through the practical implementation of a content creation agent using Groq and CrewAI.

Learning Objectives

  • Walk through how autonomous agents work, using a simple example of how humans execute a given task.
  • Understand the limitations of autonomous agents and the open research areas around them.
  • Explore the core components required to build an AI agent pipeline.
  • Build a content creator agent using CrewAI, an open-source agentic framework.
  • Integrate an open-source large language model into the agentic framework with the support of LangChain and Groq.

Understanding Autonomous Agents

Imagine a group of engineers gathering to plan the development of a new software app. Each engineer brings expertise and insights, discussing various features, functionalities, and potential challenges. They brainstorm, analyze, and strategize together, aiming to create a comprehensive plan that meets the project’s goals.

Attempting to replicate this kind of collaboration with large language models is what ultimately gave rise to autonomous agents.

Autonomous agents possess reasoning and planning capabilities similar to human intelligence, making them advanced AI systems. They are essentially LLMs with a “brain”: they can self-reason and plan how to decompose a task. A prime example of such an agent is “Devin AI,” which has sparked numerous discussions about its potential to replace human software engineers.

Although this kind of replacement may be premature given the complexity and iterative nature of software development, ongoing research in this field aims to address key areas like self-reasoning and memory utilization.

Key Research Areas for Improving Agents

  • Self-Reasoning: Enhancing the ability of LLMs to reduce hallucinations and provide accurate responses.
  • Memory Utilization: Storing past responses and experiences to avoid repeating mistakes and to improve task execution over time.
  • Prompting Strategies for Agents: Currently, most frameworks use ReAct prompting to execute agentic workflows. Other options include CoT-SC (Chain-of-Thought Self-Consistency), self-reflection, LATS, and so on. This is an ongoing research area in which new prompting strategies are benchmarked on open-source datasets.

Before exploring the components required to build an agentic workflow, let’s first understand how humans execute a simple task.

Simple Task Execution Workflow

[Figure: Simple task execution workflow]

As humans, how do we approach a problem statement?

When we approach a problem statement as humans, we follow a structured process to ensure efficient and successful execution. Take, for example, building a customer support chatbot. We don’t dive straight into coding. Instead, we plan by breaking the task down into smaller, manageable sub-tasks such as fetching data, cleaning data, building the model, and so on.

For each sub-task, we apply our relevant skills and knowledge of the right tools and frameworks to get the work done. Each sub-task requires careful planning, execution, and lessons from prior experience, so that we avoid mistakes along the way.

This process involves several iterations until the task is completed successfully. An agent workflow operates in much the same way. Let’s break it down step by step to see how it mirrors our approach.

AI Agent Workflow Components

[Figure: Components of an agent workflow]

At the heart of the workflow are the agents. Users provide a detailed description of the task. Once the task is outlined, the agent uses planning and reasoning components to break it down further. This involves using a large language model with prompting techniques like ReAct. In this prompt engineering approach, the process is divided into three parts: Thoughts, Actions, and Observations. Thoughts reason about what needs to be done, Actions refer to the supported tools and the additional context required by the LLM, and Observations are the tool results fed back to the model. A minimal sketch of this loop follows the list below.

  • Task Description: The task is described and passed to the agent.
  • Planning and Reasoning: The agent uses various prompting techniques (e.g., ReAct prompting, Chain of Thought, Self-Consistency, Self-Reflection, Language Agent Tree Search) to plan and reason.
  • Tools: The agent uses tools such as web search APIs or GitHub APIs to complete sub-tasks.
  • Memory: The task and each sub-task result are stored in memory for future reference.
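
To make the Thought → Action → Observation loop concrete, here is a minimal, framework-agnostic sketch in plain Python. The ask_llm and run_tool callables are hypothetical placeholders for whatever model and tools you plug in; crewAI and LangChain implement this loop for you, so this is purely illustrative.

def react_loop(task, ask_llm, run_tool, max_steps=5):
    # Hypothetical ReAct-style loop (not crewAI's actual implementation).
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        # Thought + Action: the LLM reasons about the next step and picks a tool.
        step = ask_llm(transcript)
        if step.get("final_answer"):
            # The model decided no further tool calls are needed.
            return step["final_answer"]
        # Observation: run the chosen tool and feed its result back to the LLM.
        observation = run_tool(step["tool"], step["tool_input"])
        transcript += (
            f"Thought: {step['thought']}\n"
            f"Action: {step['tool']}({step['tool_input']})\n"
            f"Observation: {observation}\n"
        )
    return "Stopped after reaching the step limit."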

It’s time to get our hands dirty with some code.

Content Creator Agent Using Groq and CrewAI

Let us now look at the steps to build an agentic workflow using CrewAI and an open-source model served by Groq.

Step 1: Installation

  • crewai: The open-source agents framework.
  • ‘crewai[tools]’: Supported tool integrations that supply contextual data for actions.
  • langchain_groq: Much of the crewAI backend is built on LangChain, so we use langchain_groq directly to run LLM inference.
pip install crewai 
pip install 'crewai[tools]' 
pip install langchain_groq

Step 2: Set Up the API Keys

To integrate the tool and large language model environments, securely store your API keys using the getpass module. The SERPER_API_KEY can be obtained from serper.dev, and the GROQ_API_KEY from console.groq.com.

import os
from getpass import getpass

from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool
from langchain_groq import ChatGroq

SERPER_API_KEY = getpass("Your serper api key") 
os.environ['SERPER_API_KEY'] = SERPER_API_KEY
GROQ_API_KEY = getpass("Your Groq api key") 
os.environ['GROQ_API_KEY'] = GROQ_API_KEY

Step 3: Integrate the Gemma Open-Source Model

Groq is a hardware and software platform building the LPU AI Inference Engine, known for delivering some of the fastest LLM inference available. With Groq, users can efficiently run inference on open-source LLMs such as Gemma, Mistral, and Llama with low latency and high throughput. To integrate Groq into crewAI, you can import it seamlessly via LangChain.

llm = ChatGroq(model="gemma-7b-it", groq_api_key=GROQ_API_KEY)
print(llm.invoke("hello"))

Step 4: Search Tool

SerperDevTool is a search API tool that browses the internet and returns metadata, relevant query result URLs, and brief snippets as descriptions. This information helps agents execute tasks more effectively by providing contextual data from web searches. A quick sanity-check call is sketched after the snippet below.

search_tool = SerperDevTool()
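
Before wiring the tool into an agent, you may want to call it directly and inspect the raw response. The exact keyword argument can differ between crewai_tools versions, so treat this as a rough sketch rather than a guaranteed signature:

# Rough sanity check of the search tool; the argument name (search_query)
# may differ across crewai_tools versions.
results = search_tool.run(search_query="latest developments in AI agents")
print(results)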

Step 5: Agent


Agents are the core component of the entire crewAI implementation. An agent is responsible for performing tasks, making decisions, and communicating with other agents to complete the decomposed tasks.

To achieve better results, it is essential to prompt the agent’s attributes correctly. Each agent definition in crewAI includes a role, goal, backstory, tools, and LLM. These attributes give agents their identity:

  • Role: Determines the kind of tasks the agent is best suited for.
  • Goal: Defines the objective that guides the agent’s decision-making process.
  • Backstory: Provides the agent with context about its capabilities and background.

One of the major advantages of crewAI is its multi-agent functionality, which allows one agent’s response to be delegated to another agent. To enable task delegation, the allow_delegation parameter must be set to True.

Agents iterate multiple times until they produce the correct result. You can control the maximum number of iterations by setting the max_iter parameter to 10 or a value close to it.

Note:

  • By default, crewAI uses OpenAI as the LLM. To change this, Groq can be defined as the LLM.
  • {subject_area}: This is an input variable, meaning the value declared inside the curly brackets must be provided by the user.

Python Code Implementation

researcher = Agent(
    role="Researcher",
    goal="Pioneer groundbreaking developments in {subject_area}",
    backstory=(
        "As a visionary researcher, your insatiable curiosity drives you "
        "to delve deep into emerging fields. With a passion for innovation "
        "and a commitment to scientific discovery, you seek to "
        "develop technologies and solutions that could transform the future."
    ),
    llm=llm,
    max_iter=5,
    tools=[search_tool],
    allow_delegation=True,
    verbose=True
)

writer = Agent(
    role="Writer",
    goal="Craft engaging and insightful narratives about {subject_area}",
    verbose=True,
    backstory=(
        "You are a skilled storyteller with a talent for demystifying "
        "complex innovations. Your writing illuminates the significance "
        "of new technological discoveries, connecting them with everyday "
        "lives and broader societal impacts."
    ),
    tools=[search_tool],
    llm=llm,
    max_iter=5,
    allow_delegation=False
)

Step 6: Task

Agents can only execute tasks that are provided by the user. Tasks are specific requirements completed by agents, and they carry all the details necessary for execution. For the agent to decompose and plan the sub-tasks, the user needs to define a clear description and an expected output. Each task must be linked to the responsible agent and the necessary tools.

In addition, crewAI provides the flexibility to execute a task asynchronously. Since execution in our case is sequential, we can leave async_execution set to False; an illustrative asynchronous variant is sketched after the task definitions below.

research_task = Task(
    description=(
        "Find and identify the biggest development within {subject_area}. "
        "Provide a detailed SEO report of the development in a comprehensive narrative."
    ),
    expected_output="A report, structured into three detailed paragraphs",
    tools=[search_tool],
    agent=researcher
)

write_task = Task(
    description=(
        "Compose an engaging article on recent developments within {subject_area}. "
        "The article should be clear, interesting, and positive, tailored for a broad audience."
    ),
    expected_output="A blog article on recent developments in {subject_area}, formatted in markdown.",
    tools=[search_tool],
    agent=writer,
    async_execution=False,
    output_file="blog.md"
)
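
For completeness, here is a hypothetical variant showing how an independent task could opt into asynchronous execution; our pipeline does not need it because the writer depends on the researcher’s output.

# Illustrative only: an independent task that does not feed a later step
# could run in the background while the rest of the crew proceeds.
background_task = Task(
    description="Collect background statistics on {subject_area}.",
    expected_output="A short bulleted list of key statistics with sources.",
    tools=[search_tool],
    agent=researcher,
    async_execution=True
)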

Step 7: Run and Execute the Agents

To execute a multi-agent setup in crewAI, you need to define a Crew. A Crew is a collection of agents working together to accomplish a set of tasks. Each Crew establishes the strategy for task execution, agent cooperation, and the overall workflow. In our case, the workflow is sequential, with one agent delegating tasks to the next.

Finally, the Crew executes by running the kickoff function with the user’s input.

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    max_rpm=3,
    cache=True
)

result = crew.kickoff(inputs={'subject_area': "Indian Elections 2024"})
print(result)

Output: New file created: blog.md

[Figure: Generated blog.md output]

Conclusion

As mentioned throughout the article, the field of agents remains research-focused, with development gaining momentum through the release of several open-source agent frameworks. This article is primarily a beginner’s guide for those interested in building agents without relying on closed-source large language models. It also highlights the importance of prompt engineering in maximizing the potential of both large language models and agents.

Key Takeaways

  • Understand why agentic workflows are needed in today’s landscape of large language models.
  • The agentic workflow is easy to grasp when compared with how humans execute a simple task.
  • While models like GPT-4 and Gemini are prominent, they are not the only options for building agents. Open-source models, served through the Groq API, enable the creation of agents with faster inference.
  • CrewAI provides a wide range of multi-agent functionality and workflows, which helps with efficient task decomposition.
  • Agent prompt engineering stands out as a crucial factor in improving task decomposition and planning.

Frequently Asked Questions

Q1. Can I use a custom LLM with crewAI?

A. crewAI integrates seamlessly with the LangChain backend, a robust data framework that links large language models (LLMs) to custom data sources. With over 50 LLM integrations, LangChain is one of the largest ecosystems for working with LLMs. Therefore, any LangChain-supported LLM can be used as a custom LLM with crewAI agents.
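
For example, assuming you have an Ollama server running locally with a model such as llama3 pulled, a LangChain chat model can be passed to an agent in exactly the same way we passed ChatGroq earlier; this is a sketch, not a tested configuration:

# Sketch: swapping in another LangChain-supported LLM (assumes a local
# Ollama server with the llama3 model available).
from langchain_community.chat_models import ChatOllama

local_llm = ChatOllama(model="llama3")
local_researcher = Agent(
    role="Researcher",
    goal="Pioneer groundbreaking developments in {subject_area}",
    backstory="A curious researcher exploring emerging fields.",
    llm=local_llm,
    verbose=True
)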

Q2. What are the alternatives to crewAI?

A. crewAI is an open-source agents framework that supports multi-agent functionality. Similar to crewAI, there are various powerful open-source agent frameworks available, such as AutoGen, OpenAGI, SuperAGI, AgentLite, and more.

Q3. Is crewAI open source and free?

A. Yes, crewAI is open source and free to use. You can build an agentic workflow in roughly 15 lines of code.

Q4. Is Groq free to use?

A. Yes, Groq is currently free to use with some API restrictions, such as requests per minute and tokens per minute. It offers low-latency inference for open-source models such as Gemma, Mistral, Llama, and others.
