In this tutorial, we showed how Microsoft's AutoGen framework empowers developers to orchestrate complex, multi-agent workflows with minimal code. Using AutoGen's RoundRobinGroupChat and TeamTool abstractions, you can integrate specialist assistants, such as a Researcher, FactChecker, Critic, Summarizer, and Editor, into a single "DeepDive" tool. AutoGen handles turn-taking, termination conditions, and streaming output, letting you focus on defining each agent's expertise and system prompts rather than wiring up callbacks or manual prompt chaining. Whether you are managing research, verifying facts, polishing prose, or integrating third-party tools, AutoGen provides a unified API that scales from simple two-agent pipelines to five-agent collaboratives.
!pip install -q "autogen-agentchat[gemini]" "autogen-ext[openai]" nest_asyncio
We install autogen-agentchat with the OpenAI extension for Gemini API compatibility, along with the nest_asyncio library to patch the notebook's event loop, ensuring you have all the components needed to run asynchronous, multi-agent workflows in Colab.
import os, nest_asyncio
from getpass import getpass
nest_asyncio.apply()
os.environ["GEMINI_API_KEY"] = getpass("Enter your Gemini API key: ")
We import and apply nest_asyncio to enable nested event loops in the notebook environment, then securely prompt for your Gemini API key with getpass and store it in os.environ for authenticated model-client access.
from autogen_ext.models.openai import OpenAIChatCompletionClient
model_client = OpenAIChatCompletionClient(
    model="gemini-1.5-flash-8b",
    api_key=os.environ["GEMINI_API_KEY"],
    api_type="google",
)
We instantiate an OpenAI-compatible chat client pointed at Google Gemini by specifying the gemini-1.5-flash-8b model, supplying your stored Gemini API key, and setting api_type="google", giving you a ready model backend for the downstream AutoGen agents.
from autogen_agentchat.agents import AssistantAgent
researcher = AssistantAgent(name="Researcher", system_message="Gather and summarize factual info.", model_client=model_client)
factchecker = AssistantAgent(name="FactChecker", system_message="Verify facts and cite sources.", model_client=model_client)
critic = AssistantAgent(name="Critic", system_message="Critique clarity and logic.", model_client=model_client)
summarizer = AssistantAgent(name="Summarizer",system_message="Condense into a brief executive summary.", model_client=model_client)
editor = AssistantAgent(name="Editor", system_message="Polish language and signal APPROVED when done.", model_client=model_client)
We define five specialist assistant agents, a Researcher, FactChecker, Critic, Summarizer, and Editor, each initialized with a role-specific system message and the shared Gemini-powered model client, enabling them respectively to gather information, verify accuracy, critique the material, condense it, and polish the language.
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination
max_msgs = MaxMessageTermination(max_messages=20)
text_term = TextMentionTermination(text="APPROVED", sources=["Editor"])
termination = max_msgs | text_term
team = RoundRobinGroupChat(
    participants=[researcher, factchecker, critic, summarizer, editor],
    termination_condition=termination,
)
We import the RoundRobinGroupChat class along with two termination conditions, then compose a stop rule that fires after 20 total messages or when the Editor agent mentions "APPROVED." Finally, we build a round-robin team of the five specialist agents with this combined termination logic, letting them cycle through research, fact-checking, critique, summarization, and editing until a stop condition is met.
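The control flow AutoGen implements here can be sketched in plain Python. This is a hypothetical stand-in, not AutoGen's internals: the "agents" are simple functions, but the round-robin cycling and the OR-combined stop rules mirror the `max_msgs | text_term` composition above.

```python
# Minimal sketch of round-robin turn-taking with OR-combined stop rules.
# Agent names mirror the tutorial; each "agent" is a plain function here.

def make_agent(name):
    def respond(history):
        # A real agent would call the LLM; here we just emit a tagged message.
        text = f"{name} comments on message {len(history)}"
        if name == "Editor" and len(history) >= 9:
            text += " APPROVED"  # Editor signals completion, as prompted above
        return {"source": name, "content": text}
    return respond

agents = [make_agent(n) for n in
          ["Researcher", "FactChecker", "Critic", "Summarizer", "Editor"]]

def max_messages_met(history, limit=20):
    return len(history) >= limit

def text_mention_met(history, text="APPROVED", source="Editor"):
    return any(m["source"] == source and text in m["content"] for m in history)

history = []
turn = 0
# Cycle agents until EITHER condition fires (the `|` in AutoGen's API).
while not (max_messages_met(history) or text_mention_met(history)):
    history.append(agents[turn % len(agents)](history))
    turn += 1

print(len(history), "messages; last:", history[-1]["content"])
```

Here the Editor's second turn (message index 9) emits "APPROVED", so the text-mention rule stops the loop well before the 20-message cap, exactly the fallback relationship the combined condition encodes.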
from autogen_agentchat.tools import TeamTool
deepdive_tool = TeamTool(team=team, name="DeepDive", description="Collaborative multi-agent deep dive")
We wrap our RoundRobinGroupChat team in a TeamTool named "DeepDive" with a human-readable description, effectively packaging the entire multi-agent workflow as a single callable tool that other agents can invoke.
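Conceptually, a team tool is just a named, described wrapper around a callable workflow. A plain-Python sketch of that idea (hypothetical class and function names, not AutoGen's implementation):

```python
class SimpleTool:
    """Wraps any callable as a tool with a name and description."""
    def __init__(self, func, name, description):
        self.func = func
        self.name = name
        self.description = description

    def run(self, task):
        return self.func(task)

def deep_dive_team(task):
    # Stand-in for the multi-agent pipeline: each "stage" tags the task.
    stages = ["researched", "fact-checked", "critiqued", "summarized", "edited"]
    return f"{task}: " + ", ".join(stages)

deepdive = SimpleTool(deep_dive_team, "DeepDive",
                      "Collaborative multi-agent deep dive")

# A host holding a tool registry can dispatch by name, as the Host agent
# below does via the LLM's tool-calling mechanism.
tools = {deepdive.name: deepdive}
result = tools["DeepDive"].run("Deep dive on: agentic AI")
print(result)
```

The name and description matter: they are what the hosting LLM sees when deciding whether and how to call the tool.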
host = AssistantAgent(
    name="Host",
    model_client=model_client,
    tools=[deepdive_tool],
    system_message="You have access to a DeepDive tool for in-depth research.",
)
We create a "Host" assistant agent configured with the shared Gemini-powered model_client, give it the DeepDive team tool for in-depth research, and prime it with a system message so it knows to invoke the multi-agent DeepDive workflow when asked.
import asyncio

async def run_deepdive(topic: str):
    result = await host.run(task=f"Deep dive on: {topic}")
    print("🔍 DeepDive result:\n", result)
    await model_client.close()
topic = "Impacts of Model Context Protocol on Agentic AI"
loop = asyncio.get_event_loop()
loop.run_until_complete(run_deepdive(topic))
Finally, we define an asynchronous run_deepdive function that asks the Host agent to run the DeepDive team tool on a given topic, prints the result, and then closes the model client; it then grabs Colab's existing asyncio event loop and runs the coroutine to completion for seamless, synchronous-feeling execution.
In conclusion, integrating Google Gemini through AutoGen's OpenAI-compatible client and wrapping our multi-agent team as a callable TeamTool gives us a powerful pattern for building highly modular and reusable workflows. AutoGen abstracts away event-loop management (with nest_asyncio), streaming responses, and termination logic, letting us iterate quickly on agent roles and overall orchestration. This advanced pattern streamlines the development of collaborative AI systems and lays the foundation for extensions such as retrieval pipelines, dynamic speaker selection, or conditional execution strategies.
Asif Razzaq is the CEO of Marktechpost Media Inc. A visionary entrepreneur and engineer, Asif is committed to harnessing the potential of artificial intelligence for social good. His most recent venture is the launch of the AI media platform Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable to a wide audience. The platform draws more than 2 million monthly views, reflecting its popularity among readers.
