LLM Agent
The LlmAgent (often aliased simply as Agent) is a core component of the Agent Development Kit (ADK), acting as the "thinking" part of your application. It leverages the power of a Large Language Model (LLM) for reasoning, understanding natural language, making decisions, generating responses, and interacting with tools.
Unlike Workflow Agents, which follow predefined execution paths, an LlmAgent's behavior is non-deterministic. It uses the LLM to interpret instructions and context, deciding dynamically how to proceed, which tools to use (if any), or whether to transfer control to another agent.
Building an effective LlmAgent involves defining its identity, clearly guiding its behavior through instructions, and equipping it with the necessary tools and capabilities.
Defining the Agent's Identity and Purpose
First, you need to establish what the agent is and what it is for.
- name (Required): Every agent needs a unique string identifier. This name is critical for internal operations, especially in multi-agent systems where agents need to refer to or delegate tasks to one another. Choose a descriptive name that reflects the agent's function (e.g., customer_support_router, billing_inquiry_agent). Avoid reserved names like user.
- description (Optional, recommended for multi-agent systems): Provide a concise summary of the agent's capabilities. This description is primarily used by other LLM agents to decide whether to route a task to this agent. Make it specific enough to distinguish this agent from its peers (e.g., "Handles inquiries about current billing statements," not just "Billing agent").
- model (Required): Specify the underlying LLM that powers this agent's reasoning. This is a string identifier such as "gemini-2.0-flash". The choice of model affects the agent's capabilities, cost, and performance. See the Models page for available options and considerations.
Guiding the Agent: Instructions (instruction)
The instruction parameter is arguably the most critical setting for shaping an LlmAgent's behavior. It is a string (or a function returning a string) that tells the agent:
- Its core task or goal.
- Its personality or persona (e.g., "You are a helpful assistant," "You are a witty pirate").
- Constraints on its behavior (e.g., "Only answer questions about X," "Never reveal Y").
- How and when to use its tools. Explain the purpose of each tool and the circumstances under which it should be called, supplementing any descriptions within the tools themselves.
- The desired format for its output (e.g., "Respond in JSON," "Provide a bulleted list").
Tips for Effective Instructions:
- Be Clear and Specific: Avoid ambiguity. Clearly state the desired actions and outcomes.
- Use Markdown: Improve readability of complex instructions with headings, lists, and other Markdown formatting.
- Provide Examples (Few-Shot): For complex tasks or specific output formats, include examples directly in the instruction.
- Guide Tool Use: Don't just list tools; explain when and why the agent should use them.
State:
- The instruction is a string template; you can use the {var} syntax to insert dynamic values into it.
- {var} inserts the value of the state variable named var.
- {artifact.var} inserts the text content of the artifact named var.
- If the state variable or artifact does not exist, the agent raises an error. To ignore the error instead, append a ? to the variable name, as in {var?}.
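The substitution described above is performed internally by ADK; the following stand-alone sketch only approximates it for illustration (resolve_template is a hypothetical helper, not an ADK API, and {artifact.var} references are omitted):

```python
import re

def resolve_template(template: str, state: dict) -> str:
    """Illustrative approximation of ADK's {var} / {var?} substitution."""
    def repl(match: re.Match) -> str:
        name, optional = match.group(1), match.group(2) == "?"
        if name in state:
            return str(state[name])
        if optional:
            return ""  # {var?} silently resolves to empty when missing
        raise KeyError(f"State variable '{name}' not found")
    return re.sub(r"\{(\w+)(\??)\}", repl, template)

instruction = "Greet the user named {user_name}. Their tier is {tier?}."
# {user_name} is filled from state; the missing {tier?} resolves to "" without error
print(resolve_template(instruction, {"user_name": "Ada"}))
```

Note the asymmetry: a missing {var} aborts with an error, while {var?} degrades gracefully, which matters when optional context may not yet be in session state.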
# Example: Adding instructions
capital_agent = LlmAgent(
model="gemini-2.0-flash",
name="capital_agent",
description="Answers user questions about the capital city of a given country.",
instruction="""You are an agent that provides the capital city of a country.
When a user asks for the capital of a country:
1. Identify the country name from the user's query.
2. Use the `get_capital_city` tool to find the capital.
3. Respond clearly to the user, stating the capital city.
Example Query: "What's the capital of {country}?"
Example Response: "The capital of France is Paris."
""",
# tools will be added next
)
// Example: Adding instructions
LlmAgent capitalAgent =
LlmAgent.builder()
.model("gemini-2.0-flash")
.name("capital_agent")
.description("Answers user questions about the capital city of a given country.")
.instruction(
"""
You are an agent that provides the capital city of a country.
When a user asks for the capital of a country:
1. Identify the country name from the user's query.
2. Use the `get_capital_city` tool to find the capital.
3. Respond clearly to the user, stating the capital city.
Example Query: "What's the capital of {country}?"
Example Response: "The capital of France is Paris."
""")
// tools will be added next
.build();
(Note: For instructions that should apply to all agents in a system, consider using global_instruction on the root agent, detailed further in the Multi-Agents section.)
Equipping the Agent: Tools (tools)
Tools give your LlmAgent capabilities beyond the LLM's built-in knowledge and reasoning. They allow the agent to interact with the outside world, perform calculations, fetch real-time data, or execute specific actions.
tools (Optional): Provide a list of tools the agent can use. Each item in the list can be:
- A native function or method (wrapped as a FunctionTool). The Python ADK automatically wraps native functions into a FunctionTool, whereas in Java you must explicitly wrap them using FunctionTool.create(...).
- An instance of a class extending BaseTool.
- An instance of another agent (AgentTool, enabling agent-to-agent delegation; see Multi-Agents).
The LLM uses the function/tool names, descriptions (from docstrings or the description field), and parameter schemas to decide which tool to call based on the conversation and its instructions.
# Define a tool function
def get_capital_city(country: str) -> str:
"""Retrieves the capital city for a given country."""
# Replace with actual logic (e.g., API call, database lookup)
capitals = {"france": "Paris", "japan": "Tokyo", "canada": "Ottawa"}
return capitals.get(country.lower(), f"Sorry, I don't know the capital of {country}.")
# Add the tool to the agent
capital_agent = LlmAgent(
model="gemini-2.0-flash",
name="capital_agent",
description="Answers user questions about the capital city of a given country.",
instruction="""You are an agent that provides the capital city of a country... (previous instruction text)""",
tools=[get_capital_city] # Provide the function directly
)
// Define a tool function
// Retrieves the capital city of a given country.
public static Map<String, Object> getCapitalCity(
@Schema(name = "country", description = "The country to get capital for")
String country) {
// Replace with actual logic (e.g., API call, database lookup)
Map<String, String> countryCapitals = new HashMap<>();
countryCapitals.put("canada", "Ottawa");
countryCapitals.put("france", "Paris");
countryCapitals.put("japan", "Tokyo");
String result =
countryCapitals.getOrDefault(
country.toLowerCase(), "Sorry, I couldn't find the capital for " + country + ".");
return Map.of("result", result); // Tools must return a Map
}
// Add the tool to the agent
// Assuming getCapitalCity is declared in CapitalAgentApp (the enclosing class)
FunctionTool capitalTool = FunctionTool.create(CapitalAgentApp.class, "getCapitalCity");
LlmAgent capitalAgent =
LlmAgent.builder()
.model("gemini-2.0-flash")
.name("capital_agent")
.description("Answers user questions about the capital city of a given country.")
.instruction("You are an agent that provides the capital city of a country... (previous instruction text)")
.tools(capitalTool) // Provide the function wrapped as a FunctionTool
.build();
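Since plain functions double as tools, you can sanity-check them directly before handing them to an agent. The Python tool from above is repeated here so the snippet runs on its own:

```python
def get_capital_city(country: str) -> str:
    """Retrieves the capital city for a given country."""
    # Replace with actual logic (e.g., API call, database lookup)
    capitals = {"france": "Paris", "japan": "Tokyo", "canada": "Ottawa"}
    return capitals.get(country.lower(), f"Sorry, I don't know the capital of {country}.")

print(get_capital_city("France"))    # lookup is case-insensitive
print(get_capital_city("Atlantis"))  # unknown countries get a graceful fallback message
```

Testing the function in isolation like this also verifies the docstring and signature, which are exactly what the LLM sees when deciding whether to call the tool.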
To learn more about Tools, see the Tools section.
Advanced Configuration and Control
Beyond the core parameters, LlmAgent offers several options for finer control:
Fine-Tuning LLM Generation (generate_content_config)
You can adjust how the underlying LLM generates responses using generate_content_config.
generate_content_config (Optional): Pass an instance of google.genai.types.GenerateContentConfig to control parameters like temperature (randomness), max_output_tokens (response length), top_p, top_k, and safety settings.
from google.genai import types
agent = LlmAgent(
# ... other params
generate_content_config=types.GenerateContentConfig(
temperature=0.2, # More deterministic output
max_output_tokens=250,
safety_settings=[
types.SafetySetting(
category=types.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
threshold=types.HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
)
]
)
)
Structuring Data (input_schema, output_schema, output_key)
For scenarios requiring structured data exchange with an LlmAgent, the ADK provides mechanisms to define expected input and desired output formats using schemas.
- input_schema (Optional): Define a schema describing the expected input structure. If set, the user message content passed to this agent must be a JSON string conforming to this schema. Your instructions should guide the user or preceding agent accordingly.
- output_schema (Optional): Define a schema describing the desired output structure. If set, the agent's final response must be a JSON string conforming to this schema.
- output_key (Optional): Provide a string key. If set, the text content of the agent's final response is automatically saved to the session state under this key. This is useful for passing results between agents or steps in a workflow.
  - In Python, this might look like: session.state[output_key] = agent_response_text
  - In Java: session.state().put(outputKey, agentResponseText)
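The output_key handoff amounts to a write into the shared session-state dictionary that a later agent can read back via instruction templating. A minimal stand-alone sketch of that flow (a plain dict stands in for session state):

```python
# Plain dict standing in for session.state
session_state = {}

# Step 1: the first agent's final response text is stored under its output_key
output_key = "found_capital"
agent_response_text = "Paris"
session_state[output_key] = agent_response_text

# Step 2: a downstream agent's instruction template reads the stored value back
instruction_template = "Verify that {found_capital} is correct."
resolved = instruction_template.format(**session_state)
print(resolved)  # Verify that Paris is correct.
```

This is why output_key pairs naturally with the {var} templating described earlier: one agent writes, the next agent's instruction reads.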
The input and output schemas are typically Pydantic BaseModel classes.
from pydantic import BaseModel, Field
class CapitalOutput(BaseModel):
capital: str = Field(description="The capital of the country.")
structured_capital_agent = LlmAgent(
# ... name, model, description
instruction="""You are a Capital Information Agent. Given a country, respond ONLY with a JSON object containing the capital. Format: {"capital": "capital_name"}""",
output_schema=CapitalOutput, # Enforce JSON output
output_key="found_capital" # Store result in state['found_capital']
# Cannot use tools=[get_capital_city] effectively here
)
In Java, the input and output schemas are google.genai.types.Schema objects.
private static final Schema CAPITAL_OUTPUT =
Schema.builder()
.type("OBJECT")
.description("Schema for capital city information.")
.properties(
Map.of(
"capital",
Schema.builder()
.type("STRING")
.description("The capital city of the country.")
.build()))
.build();
LlmAgent structuredCapitalAgent =
LlmAgent.builder()
// ... name, model, description
.instruction(
"You are a Capital Information Agent. Given a country, respond ONLY with a JSON object containing the capital. Format: {\"capital\": \"capital_name\"}")
.outputSchema(CAPITAL_OUTPUT) // Enforce JSON output
.outputKey("found_capital") // Store result in state.get("found_capital")
// Cannot use tools(getCapitalCity) effectively here
.build();
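Because output_schema enforcement yields a JSON string, downstream code can round-trip the agent's response through the same Pydantic model from the Python example above (assumes pydantic v2, which the model_validate_json / model_json_schema API implies):

```python
from pydantic import BaseModel, Field

class CapitalOutput(BaseModel):
    capital: str = Field(description="The capital of the country.")

# The agent's final response text is a JSON string matching the schema
raw_response = '{"capital": "Paris"}'
parsed = CapitalOutput.model_validate_json(raw_response)  # raises on schema violations
print(parsed.capital)  # Paris
```

Validating on the consuming side gives you typed access to the fields and an early failure if the model ever emits malformed JSON.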
Managing Context (include_contents)
Control whether the agent receives the prior conversation history.
include_contents (Optional, Default: 'default'): Determines whether the contents (history) are sent to the LLM.
- 'default': The agent receives the relevant conversation history.
- 'none': The agent receives no prior contents. It operates based solely on its current instruction and the input provided in the current turn (useful for stateless tasks or enforcing specific contexts).
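Conceptually, the setting controls which slice of the conversation reaches the model on each turn. A stand-alone sketch of the two modes (build_llm_contents is a hypothetical helper, not an ADK API):

```python
def build_llm_contents(history: list[str], current_input: str,
                       include_contents: str = "default") -> list[str]:
    """Illustrative: what the LLM sees under each include_contents mode."""
    if include_contents == "none":
        return [current_input]        # stateless: current turn only
    return history + [current_input]  # default: relevant history plus current turn

history = ["user: hi", "agent: hello!"]
print(build_llm_contents(history, "user: what's 2+2?"))          # full context
print(build_llm_contents(history, "user: what's 2+2?", "none"))  # current turn only
```

'none' is useful when the agent should behave identically regardless of what was said before, e.g. a formatting or classification step inside a larger workflow.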
Planner
planner (Optional): Specify a BasePlanner instance to enable multi-step reasoning and planning before execution. There are two main planners:
- BuiltInPlanner: Leverages the model's built-in planning capabilities (e.g., Gemini's thinking feature). See Gemini Thinking for details and examples. Here, the thinking_budget parameter guides the model on how many thinking tokens to use when generating a response, and the include_thoughts parameter controls whether the model should include its raw thoughts and internal reasoning in the response.
- PlanReActPlanner: This planner instructs the model to follow a specific structure in its output: first create a plan, then execute actions (such as calling tools), and provide reasoning for each step. It is particularly useful for models that lack a built-in "thinking" feature.
from google.adk import Agent
from google.adk.planners import PlanReActPlanner

my_agent = Agent(
    model="gemini-2.0-flash",
    planner=PlanReActPlanner(),
    # ... your tools here
)
The agent's response will follow a structured format:
[user]: ai news
[google_search_agent]: /*PLANNING*/
1. Perform a Google search for "latest AI news" to get current updates and headlines related to artificial intelligence.
2. Synthesize the information from the search results to provide a summary of recent AI news.

/*ACTION*/

/*REASONING*/
The search results provide a comprehensive overview of recent AI news, covering various aspects like company developments, research breakthroughs, and applications. I have enough information to answer the user's request.

/*FINAL_ANSWER*/
Here's a summary of recent AI news:
....
Code Execution
code_executor (Optional): Provide a BaseCodeExecutor instance to allow the agent to execute code blocks found in the LLM's responses. (See Tools/Built-in tools.)
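Stripped to its essence, a code executor finds fenced code in the model's response, runs it, and captures the result. The deliberately simplified sketch below illustrates that loop; it is not the BaseCodeExecutor API, and real executors add sandboxing, persistent state, and error handling:

```python
import contextlib
import io
import re

FENCE = "`" * 3  # avoids writing a literal triple-backtick inside this snippet

def run_code_blocks(llm_response: str) -> list[str]:
    """Extract python code fences from a response, execute each, capture stdout."""
    pattern = re.compile(FENCE + r"python\n(.*?)" + FENCE, re.DOTALL)
    outputs = []
    for code in pattern.findall(llm_response):
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})  # real executors sandbox this instead of exec-ing in-process
        outputs.append(buf.getvalue())
    return outputs

response = f"Computing:\n{FENCE}python\nprint(6 * 7)\n{FENCE}\nDone."
print(run_code_blocks(response))  # ['42\n']
```

In ADK the captured output is fed back to the LLM so it can incorporate the computed result into its final answer.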
Example using the built-in planner:
import asyncio
import datetime
from zoneinfo import ZoneInfo

from google.genai import types
from google.genai.types import ThinkingConfig
from google.adk.agents.llm_agent import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.planners import BuiltInPlanner
APP_NAME = "weather_app"
USER_ID = "1234"
SESSION_ID = "session1234"
def get_weather(city: str) -> dict:
"""Retrieves the current weather report for a specified city.
Args:
city (str): The name of the city for which to retrieve the weather report.
Returns:
dict: status and result or error msg.
"""
if city.lower() == "new york":
return {
"status": "success",
"report": (
"The weather in New York is sunny with a temperature of 25 degrees"
" Celsius (77 degrees Fahrenheit)."
),
}
else:
return {
"status": "error",
"error_message": f"Weather information for '{city}' is not available.",
}
def get_current_time(city: str) -> dict:
"""Returns the current time in a specified city.
Args:
city (str): The name of the city for which to retrieve the current time.
Returns:
dict: status and result or error msg.
"""
if city.lower() == "new york":
tz_identifier = "America/New_York"
else:
return {
"status": "error",
"error_message": (
f"Sorry, I don't have timezone information for {city}."
),
}
tz = ZoneInfo(tz_identifier)
now = datetime.datetime.now(tz)
report = (
f'The current time in {city} is {now.strftime("%Y-%m-%d %H:%M:%S %Z%z")}'
)
return {"status": "success", "report": report}
# Step 1: Create a ThinkingConfig
thinking_config = ThinkingConfig(
include_thoughts=True, # Ask the model to include its thoughts in the response
thinking_budget=256 # Limit the 'thinking' to 256 tokens (adjust as needed)
)
print("ThinkingConfig:", thinking_config)
# Step 2: Instantiate BuiltInPlanner
planner = BuiltInPlanner(
thinking_config=thinking_config
)
print("BuiltInPlanner created.")
# Step 3: Wrap the planner in an LlmAgent
agent = LlmAgent(
model="gemini-2.5-pro-preview-03-25", # Set your model name
name="weather_and_time_agent",
instruction="You are an agent that returns time and weather",
planner=planner,
tools=[get_weather, get_current_time]
)
# Session and Runner
session_service = InMemorySessionService()
session = asyncio.run(session_service.create_session(app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID))
runner = Runner(agent=agent, app_name=APP_NAME, session_service=session_service)
# Agent Interaction
def call_agent(query):
content = types.Content(role='user', parts=[types.Part(text=query)])
events = runner.run(user_id=USER_ID, session_id=SESSION_ID, new_message=content)
for event in events:
print(f"\nDEBUG EVENT: {event}\n")
if event.is_final_response() and event.content:
final_answer = event.content.parts[0].text.strip()
print("\n🟢 FINAL ANSWER\n", final_answer, "\n")
call_agent("If it's raining in New York right now, what is the current temperature?")
Putting It Together: Example
Code
Here is the complete basic capital_agent:
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --- Full example code demonstrating LlmAgent with Tools vs. Output Schema ---
import json # Needed for pretty printing dicts
import asyncio
from google.adk.agents import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types
from pydantic import BaseModel, Field
# --- 1. Define Constants ---
APP_NAME = "agent_comparison_app"
USER_ID = "test_user_456"
SESSION_ID_TOOL_AGENT = "session_tool_agent_xyz"
SESSION_ID_SCHEMA_AGENT = "session_schema_agent_xyz"
MODEL_NAME = "gemini-2.0-flash"
# --- 2. Define Schemas ---
# Input schema used by both agents
class CountryInput(BaseModel):
country: str = Field(description="The country to get information about.")
# Output schema ONLY for the second agent
class CapitalInfoOutput(BaseModel):
capital: str = Field(description="The capital city of the country.")
# Note: Population is illustrative; the LLM will infer or estimate this
# as it cannot use tools when output_schema is set.
population_estimate: str = Field(description="An estimated population of the capital city.")
# --- 3. Define the Tool (Only for the first agent) ---
def get_capital_city(country: str) -> str:
"""Retrieves the capital city of a given country."""
print(f"\n-- Tool Call: get_capital_city(country='{country}') --")
country_capitals = {
"united states": "Washington, D.C.",
"canada": "Ottawa",
"france": "Paris",
"japan": "Tokyo",
}
result = country_capitals.get(country.lower(), f"Sorry, I couldn't find the capital for {country}.")
print(f"-- Tool Result: '{result}' --")
return result
# --- 4. Configure Agents ---
# Agent 1: Uses a tool and output_key
capital_agent_with_tool = LlmAgent(
model=MODEL_NAME,
name="capital_agent_tool",
description="Retrieves the capital city using a specific tool.",
instruction="""You are a helpful agent that provides the capital city of a country using a tool.
The user will provide the country name in a JSON format like {"country": "country_name"}.
1. Extract the country name.
2. Use the `get_capital_city` tool to find the capital.
3. Respond clearly to the user, stating the capital city found by the tool.
""",
tools=[get_capital_city],
input_schema=CountryInput,
output_key="capital_tool_result", # Store final text response
)
# Agent 2: Uses output_schema (NO tools possible)
structured_info_agent_schema = LlmAgent(
model=MODEL_NAME,
name="structured_info_agent_schema",
description="Provides capital and estimated population in a specific JSON format.",
instruction=f"""You are an agent that provides country information.
The user will provide the country name in a JSON format like {{"country": "country_name"}}.
Respond ONLY with a JSON object matching this exact schema:
{json.dumps(CapitalInfoOutput.model_json_schema(), indent=2)}
Use your knowledge to determine the capital and estimate the population. Do not use any tools.
""",
# *** NO tools parameter here - using output_schema prevents tool use ***
input_schema=CountryInput,
output_schema=CapitalInfoOutput, # Enforce JSON output structure
output_key="structured_info_result", # Store final JSON response
)
# --- 5. Set up Session Management and Runners ---
session_service = InMemorySessionService()
# Create a runner for EACH agent
capital_runner = Runner(
agent=capital_agent_with_tool,
app_name=APP_NAME,
session_service=session_service
)
structured_runner = Runner(
agent=structured_info_agent_schema,
app_name=APP_NAME,
session_service=session_service
)
# --- 6. Define Agent Interaction Logic ---
async def call_agent_and_print(
runner_instance: Runner,
agent_instance: LlmAgent,
session_id: str,
query_json: str
):
"""Sends a query to the specified agent/runner and prints results."""
print(f"\n>>> Calling Agent: '{agent_instance.name}' | Query: {query_json}")
user_content = types.Content(role='user', parts=[types.Part(text=query_json)])
final_response_content = "No final response received."
async for event in runner_instance.run_async(user_id=USER_ID, session_id=session_id, new_message=user_content):
# print(f"Event: {event.type}, Author: {event.author}") # Uncomment for detailed logging
if event.is_final_response() and event.content and event.content.parts:
# For output_schema, the content is the JSON string itself
final_response_content = event.content.parts[0].text
print(f"<<< Agent '{agent_instance.name}' Response: {final_response_content}")
current_session = await session_service.get_session(app_name=APP_NAME,
user_id=USER_ID,
session_id=session_id)
stored_output = current_session.state.get(agent_instance.output_key)
# Pretty print if the stored output looks like JSON (likely from output_schema)
print(f"--- Session State ['{agent_instance.output_key}']: ", end="")
try:
# Attempt to parse and pretty print if it's JSON
parsed_output = json.loads(stored_output)
print(json.dumps(parsed_output, indent=2))
except (json.JSONDecodeError, TypeError):
# Otherwise, print as string
print(stored_output)
print("-" * 30)
# --- 7. Run Interactions ---
async def main():
# Create separate sessions for clarity, though not strictly necessary if context is managed
print("--- Creating Sessions ---")
await session_service.create_session(app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID_TOOL_AGENT)
await session_service.create_session(app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID_SCHEMA_AGENT)
print("--- Testing Agent with Tool ---")
await call_agent_and_print(capital_runner, capital_agent_with_tool, SESSION_ID_TOOL_AGENT, '{"country": "France"}')
await call_agent_and_print(capital_runner, capital_agent_with_tool, SESSION_ID_TOOL_AGENT, '{"country": "Canada"}')
print("\n\n--- Testing Agent with Output Schema (No Tool Use) ---")
await call_agent_and_print(structured_runner, structured_info_agent_schema, SESSION_ID_SCHEMA_AGENT, '{"country": "France"}')
await call_agent_and_print(structured_runner, structured_info_agent_schema, SESSION_ID_SCHEMA_AGENT, '{"country": "Japan"}')
# --- Run the Agent ---
# Note: In Colab, you can directly use 'await' at the top level.
# If running this code as a standalone Python script, you'll need to use asyncio.run() or manage the event loop.
if __name__ == "__main__":
asyncio.run(main())
// --- Full example code demonstrating LlmAgent with Tools vs. Output Schema ---
import com.google.adk.agents.LlmAgent;
import com.google.adk.events.Event;
import com.google.adk.runner.Runner;
import com.google.adk.sessions.InMemorySessionService;
import com.google.adk.sessions.Session;
import com.google.adk.tools.Annotations;
import com.google.adk.tools.FunctionTool;
import com.google.genai.types.Content;
import com.google.genai.types.Part;
import com.google.genai.types.Schema;
import io.reactivex.rxjava3.core.Flowable;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
public class LlmAgentExample {
// --- 1. Define Constants ---
private static final String MODEL_NAME = "gemini-2.0-flash";
private static final String APP_NAME = "capital_agent_tool";
private static final String USER_ID = "test_user_456";
private static final String SESSION_ID_TOOL_AGENT = "session_tool_agent_xyz";
private static final String SESSION_ID_SCHEMA_AGENT = "session_schema_agent_xyz";
// --- 2. Define Schemas ---
// Input schema used by both agents
private static final Schema COUNTRY_INPUT_SCHEMA =
Schema.builder()
.type("OBJECT")
.description("Input for specifying a country.")
.properties(
Map.of(
"country",
Schema.builder()
.type("STRING")
.description("The country to get information about.")
.build()))
.required(List.of("country"))
.build();
// Output schema ONLY for the second agent
private static final Schema CAPITAL_INFO_OUTPUT_SCHEMA =
Schema.builder()
.type("OBJECT")
.description("Schema for capital city information.")
.properties(
Map.of(
"capital",
Schema.builder()
.type("STRING")
.description("The capital city of the country.")
.build(),
"population_estimate",
Schema.builder()
.type("STRING")
.description("An estimated population of the capital city.")
.build()))
.required(List.of("capital", "population_estimate"))
.build();
// --- 3. Define the Tool (Only for the first agent) ---
// Retrieves the capital city of a given country.
public static Map<String, Object> getCapitalCity(
@Annotations.Schema(name = "country", description = "The country to get capital for")
String country) {
System.out.printf("%n-- Tool Call: getCapitalCity(country='%s') --%n", country);
Map<String, String> countryCapitals = new HashMap<>();
countryCapitals.put("united states", "Washington, D.C.");
countryCapitals.put("canada", "Ottawa");
countryCapitals.put("france", "Paris");
countryCapitals.put("japan", "Tokyo");
String result =
countryCapitals.getOrDefault(
country.toLowerCase(), "Sorry, I couldn't find the capital for " + country + ".");
System.out.printf("-- Tool Result: '%s' --%n", result);
return Map.of("result", result); // Tools must return a Map
}
public static void main(String[] args){
LlmAgentExample agentExample = new LlmAgentExample();
FunctionTool capitalTool = FunctionTool.create(agentExample.getClass(), "getCapitalCity");
// --- 4. Configure Agents ---
// Agent 1: Uses a tool and output_key
LlmAgent capitalAgentWithTool =
LlmAgent.builder()
.model(MODEL_NAME)
.name("capital_agent_tool")
.description("Retrieves the capital city using a specific tool.")
.instruction(
"""
You are a helpful agent that provides the capital city of a country using a tool.
1. Extract the country name.
2. Use the `get_capital_city` tool to find the capital.
3. Respond clearly to the user, stating the capital city found by the tool.
""")
.tools(capitalTool)
.inputSchema(COUNTRY_INPUT_SCHEMA)
.outputKey("capital_tool_result") // Store final text response
.build();
// Agent 2: Uses an output schema
LlmAgent structuredInfoAgentSchema =
LlmAgent.builder()
.model(MODEL_NAME)
.name("structured_info_agent_schema")
.description("Provides capital and estimated population in a specific JSON format.")
.instruction(
String.format("""
You are an agent that provides country information.
Respond ONLY with a JSON object matching this exact schema: %s
Use your knowledge to determine the capital and estimate the population. Do not use any tools.
""", CAPITAL_INFO_OUTPUT_SCHEMA.toJson()))
// *** NO tools parameter here - using output_schema prevents tool use ***
.inputSchema(COUNTRY_INPUT_SCHEMA)
.outputSchema(CAPITAL_INFO_OUTPUT_SCHEMA) // Enforce JSON output structure
.outputKey("structured_info_result") // Store final JSON response
.build();
// --- 5. Set up Session Management and Runners ---
InMemorySessionService sessionService = new InMemorySessionService();
sessionService.createSession(APP_NAME, USER_ID, null, SESSION_ID_TOOL_AGENT).blockingGet();
sessionService.createSession(APP_NAME, USER_ID, null, SESSION_ID_SCHEMA_AGENT).blockingGet();
Runner capitalRunner = new Runner(capitalAgentWithTool, APP_NAME, null, sessionService);
Runner structuredRunner = new Runner(structuredInfoAgentSchema, APP_NAME, null, sessionService);
// --- 6. Run Interactions ---
System.out.println("--- Testing Agent with Tool ---");
agentExample.callAgentAndPrint(
capitalRunner, capitalAgentWithTool, SESSION_ID_TOOL_AGENT, "{\"country\": \"France\"}");
agentExample.callAgentAndPrint(
capitalRunner, capitalAgentWithTool, SESSION_ID_TOOL_AGENT, "{\"country\": \"Canada\"}");
System.out.println("\n\n--- Testing Agent with Output Schema (No Tool Use) ---");
agentExample.callAgentAndPrint(
structuredRunner,
structuredInfoAgentSchema,
SESSION_ID_SCHEMA_AGENT,
"{\"country\": \"France\"}");
agentExample.callAgentAndPrint(
structuredRunner,
structuredInfoAgentSchema,
SESSION_ID_SCHEMA_AGENT,
"{\"country\": \"Japan\"}");
}
// --- 7. Define Agent Interaction Logic ---
public void callAgentAndPrint(Runner runner, LlmAgent agent, String sessionId, String queryJson) {
System.out.printf(
"%n>>> Calling Agent: '%s' | Session: '%s' | Query: %s%n",
agent.name(), sessionId, queryJson);
Content userContent = Content.fromParts(Part.fromText(queryJson));
final String[] finalResponseContent = {"No final response received."};
Flowable<Event> eventStream = runner.runAsync(USER_ID, sessionId, userContent);
// Stream event response
eventStream.blockingForEach(event -> {
if (event.finalResponse() && event.content().isPresent()) {
event
.content()
.get()
.parts()
.flatMap(parts -> parts.isEmpty() ? Optional.empty() : Optional.of(parts.get(0)))
.flatMap(Part::text)
.ifPresent(text -> finalResponseContent[0] = text);
}
});
System.out.printf("<<< Agent '%s' Response: %s%n", agent.name(), finalResponseContent[0]);
// Retrieve the session again to get the updated state
Session updatedSession =
runner
.sessionService()
.getSession(APP_NAME, USER_ID, sessionId, Optional.empty())
.blockingGet();
if (updatedSession != null && agent.outputKey().isPresent()) {
  String outputKey = agent.outputKey().get();
  // Print the stored output to verify what landed in session state (JSON if output_schema was used)
  System.out.printf("--- Session State ['%s']: %s%n", outputKey, updatedSession.state().get(outputKey));
}
}
}
}
(This example demonstrates the core concepts. More complex agents might combine schemas, context control, planning, and more.)
Related Concepts (Deferred Topics)
This page covered the core configuration of LlmAgent. Several related concepts provide more advanced control and are detailed elsewhere:
- Callbacks: Intercepting execution points (before/after model calls, before/after tool calls) using before_model_callback, after_model_callback, etc. See Callbacks.
- Multi-Agent Control: Advanced strategies for agent interaction, including planning (planner), controlling agent transfer (disallow_transfer_to_parent, disallow_transfer_to_peers), and system-wide instructions (global_instruction). See Multi-Agents.