Function calling with the Gemini API

Function calling lets you connect models to external tools and APIs. Instead of generating a text response, the model determines when to call specific functions and provides the necessary parameters to execute real-world actions. This allows the model to act as a bridge between natural language and real-world actions and data. Function calling has three primary use cases:

  • Augment knowledge: Access information from external sources such as databases, APIs, and knowledge bases.
  • Extend capabilities: Use external tools to perform computations and extend the limitations of the model, such as using a calculator or creating charts.
  • Take actions: Interact with external systems using APIs, such as scheduling appointments, creating invoices, sending emails, or controlling smart home devices.

How function calling works

Function calling overview

Function calling involves a structured interaction between your application, the model, and external functions. Here's a breakdown of the process:

  1. Define a function declaration: Define the function declaration in your application code. A function declaration describes the function's name, parameters, and purpose to the model.
  2. Call the LLM with function declarations: Send the user prompt along with the function declaration(s) to the model. The model analyzes the request and determines whether a function call would be helpful. If so, it responds with a structured JSON object.
  3. Execute the function code (your responsibility): The model does not execute the function itself. It's your application's responsibility to process the response and check for a function call:
    • Yes: Extract the name and arguments of the function and execute the corresponding function in your application.
    • No: The model has provided a direct text response to the prompt (this flow is less emphasized in the example, but it is a possible outcome).
  4. Create a user-friendly response: If a function was executed, capture the result and send it back to the model in a subsequent turn of the conversation. The model uses the result to generate a final response that incorporates the information from the function call.

This process can be repeated over multiple turns, enabling complex interactions and workflows. The model also supports calling multiple functions in a single turn (parallel function calling) and calling them in sequence (compositional function calling).

Step 1: Define a function declaration

Define a function and its declaration within your application code that allows users to set light values and make an API request. This function could call external services or APIs.

Python

# Define a function that the model can call to control smart lights
set_light_values_declaration = {
    "name": "set_light_values",
    "description": "Sets the brightness and color temperature of a light.",
    "parameters": {
        "type": "object",
        "properties": {
            "brightness": {
                "type": "integer",
                "description": "Light level from 0 to 100. Zero is off and 100 is full brightness",
            },
            "color_temp": {
                "type": "string",
                "enum": ["daylight", "cool", "warm"],
                "description": "Color temperature of the light fixture, which can be `daylight`, `cool` or `warm`.",
            },
        },
        "required": ["brightness", "color_temp"],
    },
}

# This is the actual function that would be called based on the model's suggestion
def set_light_values(brightness: int, color_temp: str) -> dict[str, int | str]:
    """Set the brightness and color temperature of a room light. (mock API).

    Args:
        brightness: Light level from 0 to 100. Zero is off and 100 is full brightness
        color_temp: Color temperature of the light fixture, which can be `daylight`, `cool` or `warm`.

    Returns:
        A dictionary containing the set brightness and color temperature.
    """
    return {"brightness": brightness, "colorTemperature": color_temp}

JavaScript

import { Type } from '@google/genai';

// Define a function that the model can call to control smart lights
const setLightValuesFunctionDeclaration = {
  name: 'set_light_values',
  description: 'Sets the brightness and color temperature of a light.',
  parameters: {
    type: Type.OBJECT,
    properties: {
      brightness: {
        type: Type.NUMBER,
        description: 'Light level from 0 to 100. Zero is off and 100 is full brightness',
      },
      color_temp: {
        type: Type.STRING,
        enum: ['daylight', 'cool', 'warm'],
        description: 'Color temperature of the light fixture, which can be `daylight`, `cool` or `warm`.',
      },
    },
    required: ['brightness', 'color_temp'],
  },
};

/**
 * Set the brightness and color temperature of a room light. (mock API)
 * @param {number} brightness - Light level from 0 to 100. Zero is off and 100 is full brightness
 * @param {string} color_temp - Color temperature of the light fixture, which can be `daylight`, `cool` or `warm`.
 * @return {Object} A dictionary containing the set brightness and color temperature.
 */
function setLightValues(brightness, color_temp) {
  return {
    brightness: brightness,
    colorTemperature: color_temp
  };
}

Step 2: Call the model with function declarations

Once you have defined your function declarations, you can prompt the model to use them. The model analyzes the prompt and the function declarations and decides whether to respond directly or to call a function. If a function is called, the response object contains a function call suggestion.

Python

from google import genai
from google.genai import types

# Configure the client and tools
client = genai.Client()
tools = types.Tool(function_declarations=[set_light_values_declaration])
config = types.GenerateContentConfig(tools=[tools])

# Define user prompt
contents = [
    types.Content(
        role="user", parts=[types.Part(text="Turn the lights down to a romantic level")]
    )
]

# Send request with function declarations
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=contents,
    config=config,
)

print(response.candidates[0].content.parts[0].function_call)

JavaScript

import { GoogleGenAI } from '@google/genai';

// Generation config with function declaration
const config = {
  tools: [{
    functionDeclarations: [setLightValuesFunctionDeclaration]
  }]
};

// Configure the client
const ai = new GoogleGenAI({});

// Define user prompt
const contents = [
  {
    role: 'user',
    parts: [{ text: 'Turn the lights down to a romantic level' }]
  }
];

// Send request with function declarations
const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: contents,
  config: config
});

console.log(response.functionCalls[0]);

The model then returns a functionCall object in an OpenAPI compatible schema specifying how to call one or more of the declared functions in order to respond to the user's question.

Python

id=None args={'color_temp': 'warm', 'brightness': 25} name='set_light_values'

JavaScript

{
  name: 'set_light_values',
  args: { brightness: 25, color_temp: 'warm' }
}

Step 3: Execute the set_light_values function code

Extract the function call details from the model's response, parse the arguments, and execute the set_light_values function.

Python

# Extract the tool call details (note: it may not always be in the first part).
tool_call = response.candidates[0].content.parts[0].function_call

if tool_call.name == "set_light_values":
    result = set_light_values(**tool_call.args)
    print(f"Function execution result: {result}")

JavaScript

// Extract tool call details
const tool_call = response.functionCalls[0];

let result;
if (tool_call.name === 'set_light_values') {
  result = setLightValues(tool_call.args.brightness, tool_call.args.color_temp);
  console.log(`Function execution result: ${JSON.stringify(result)}`);
}

Step 4: Create a user-friendly response with the function result and call the model again

Finally, send the result of the function execution back to the model so it can incorporate this information into its final response to the user.

Python

# Create a function response part
function_response_part = types.Part.from_function_response(
    name=tool_call.name,
    response={"result": result},
)

# Append function call and result of the function execution to contents
contents.append(response.candidates[0].content) # Append the content from the model's response.
contents.append(types.Content(role="user", parts=[function_response_part])) # Append the function response

final_response = client.models.generate_content(
    model="gemini-2.5-flash",
    config=config,
    contents=contents,
)

print(final_response.text)

JavaScript

// Create a function response part
const function_response_part = {
  name: tool_call.name,
  response: { result }
}

// Append function call and result of the function execution to contents
contents.push(response.candidates[0].content);
contents.push({ role: 'user', parts: [{ functionResponse: function_response_part }] });

// Get the final response from the model
const final_response = await ai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: contents,
  config: config
});

console.log(final_response.text);

This completes the function calling flow. The model successfully used the set_light_values function to perform the action requested by the user.

Function declarations

When you implement function calling in a prompt, you create a tools object, which contains one or more function declarations. You define functions using JSON, specifically with a select subset of the OpenAPI schema format. A single function declaration can include the following parameters (an illustrative declaration that brings these fields together follows the list):

  • name (string): A unique name for the function (get_weather_forecast, send_email). Use descriptive names without spaces or special characters (use underscores or camelCase).
  • description (string): A clear and detailed explanation of the function's purpose and capabilities. This is critical for the model to understand when to use the function. Be specific and provide examples if helpful ("Finds theaters based on location and optionally the title of a movie that is currently playing in theaters.").
  • parameters (object): Defines the input parameters the function expects.
    • type (string): Specifies the overall data type, such as object.
    • properties (object): Lists individual parameters, each with:
      • type (string): The data type of the parameter, such as string, integer, boolean, array.
      • description (string): A description of the parameter's purpose and format. Provide examples and constraints ("The city and state, e.g. 'San Francisco, CA', or a zip code, e.g. '95616'.").
      • enum (array, optional): If the parameter values are from a fixed set, use "enum" to list the allowed values instead of just describing them in the description. This improves accuracy ("enum": ["daylight", "cool", "warm"]).
    • required (array): An array of strings listing the parameter names that are mandatory for the function to operate.
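
A hedged illustration of these fields, based on the theater-search example mentioned in the description guidance above (the function name and parameters are illustrative only, not part of the walkthrough):

Python

find_theaters_declaration = {
    "name": "find_theaters",
    "description": "Finds theaters based on location and optionally the title of a movie that is currently playing in theaters.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. 'San Francisco, CA', or a zip code, e.g. '95616'.",
            },
            "movie": {
                "type": "string",
                "description": "Optional. The title of a movie currently playing in theaters.",
            },
        },
        "required": ["location"],
    },
}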

Function calling with thinking

Enabling "thinking" can improve function call performance by allowing the model to reason through a request before suggesting function calls.

However, because the Gemini API is stateless, this reasoning context is lost between turns, which can reduce the quality of function calls, as they require multi-turn requests.

To preserve this context, you can use thought signatures. A thought signature is an encrypted representation of the model's internal thought process that you pass back to the model on subsequent turns.

To use thought signatures:

  1. Receive the signature: When thinking is enabled, the API response includes a thought_signature field containing an encrypted representation of the model's reasoning.
  2. Return the signature: When you send the function's execution result back to the server, include the thought_signature you received.

This allows the model to restore its previous thinking context, which will likely result in better function calling performance.

Receiving signatures from the server

Signatures are returned in the part following the model's thinking phase, which is typically a text part or a function call part.

Here are some examples of what thought signatures look like, returned in each type of part, in response to the request "What's the weather in Lake Tahoe?" using the Get Weather example:

Text part

[{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": "Here's what the weather in Lake Tahoe is today",
            "thoughtSignature": "ClcBVKhc7ru7KzUI7SrdUoIdAYLm/+i93aHjfIt4xHyAoO/G70tApxnK2ujBhOhC1PrRy1pkQa88fqFvpHNVd1HDjNLO7mkp6/hFwE+SPPEB3fh0hs4oM8MKhgIBVKhc7uIGvrS7i/T4HpfbnYrluFfWNjZ62gewqe4cVdR/Dlh+zbjtYmDD0gPZ+SuBO7vvHQdzsjePRP+2Y5XddX6LEf/cGGgakq8EhVvw/a6IVzUO6XmpHg2Ag1sl8E9+VFH/lC0R0ZuYdFWligtDuYwp5p5q3o59G0TtWeU2MC1y2MJfE9u/KWd313ldka80/X2W/xF2O/4djMp5G2WKcULfve75zeRCy0mc5iS3SB9mTH0cT6x0vtKjeBx50gcg+CQWtJcRuwTVzz54dmvmK9xvnqA8gKGw3DuaM9wfy5hyY7Qg0z3iyyWdP8T/lbjKim8IEQOk7O1vVwP1Ko7oMYH8JgA1CsoBAVSoXO6v4c5RSyd1cn6EIU0pEFQsjW7rYWPuZdOFq/tsGJT9BCfW7KGkPGwlNSq8jTJFvbcJ/DjtndISQYXwiXd2kGa5JfdS2Kh4zOxCxiWtOk+2nCc3+XQk2nonhO+esGJpkDdbbHZSqRgcUtYKq7q28iPFOQvOFyCiZNB7K86Z/6Hnagu2snSlN/BcTMaFGaWpcCClSUo4foRZn3WbNCoM8rcpD7qEJMp4a5baaSxyyeL1ZTGd2HLpFys/oiW6e3oAnhxuIysCwg=="
          }
        ],
        "role": "model"
      },
      "index": 0
    }
  ],
  # Remainder of response...

Function call part

[{
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "functionCall": {
              "name": "getWeather",
              "args": {
                "city": "Lake Tahoe"
              }
            },
            "thoughtSignature": "CiwBVKhc7nRyTi3HmggPD9iQiRc261f5jwuMdw3H/itDH0emsb9ZVo3Nwx9p6wpsAVSoXO5i8fDV4jBSBLoaWxB5zUdlGY6aIGp+I0oEnwRRSRQ1LOvrDlojEH8JE8HjiKXALdJrvNPiG+HY3GZEO8pZjEZtc3UoBUh7+SVyjK7Xolu7aRYYeUyzrCapoETWypER1jbrJXnFV23hCosBAVSoXO6oIPNJSmbuEDfGafOhuCSHkpr1yjTp35RXYqmCESzRzWf5+nFXLqncqeFo4ohoxbiYQVpVQbOZF81p8o9zg6xeRE7qMeOv+XN7enXGJ4/s3qNFQpfkSMqRdBITN1VpX7jyfEAjvxBNc7PDfDJZmEPY338ZIY5nFFcmzJSWjVrboFt2sMFv+A=="
          }
        ],
        "role": "model"
      },
      "finishReason": "STOP",
      "index": 0
    }
  ],
  # Remainder of response...

You can confirm that you received a signature, and see what a signature looks like, using the following code:

# Step 2: Call the model with function declarations
# ...Generation config, Configure the client, and Define user prompt (No changes)

# Send request with declarations (using a thinking model)
response = client.models.generate_content(
  model="gemini-2.5-flash", config=config, contents=contents)

# See thought signatures
for part in response.candidates[0].content.parts:
  if part.thought_signature:
    print("Thought signature:")
    print(part.thought_signature)

Returning signatures to the server

To return signatures:

  • Return signatures along with the parts that contain them back to the server.
  • Don't merge a part that has a signature with another part that also contains a signature. The signature strings are not concatenable.
  • Don't merge a part that has a signature with a part that doesn't have one. This breaks the correct positioning of the thought represented by the signature.

The code is the same as in Step 4 of the previous section, but in this case (as indicated in the comments below) you return the signatures along with the function execution result to the model, so the model can incorporate the thoughts into its final response:

Python

# Step 4: Create user friendly response with function result and call the model again
# ...Create a function response part (No change)

# Append thought signatures, function call and result of the function execution to contents
function_call_content = response.candidates[0].content
# Append the model's function call message, which includes thought signatures
contents.append(function_call_content)
contents.append(types.Content(role="user", parts=[function_response_part])) # Append the function response

final_response = client.models.generate_content(
    model="gemini-2.5-flash",
    config=config,
    contents=contents,
)

print(final_response.text)

JavaScript

// Step 4: Create user friendly response with function result and call the model again
// ...Create a function response part (No change)

// Append thought signatures, function call and result of the function execution to contents
const function_call_content = response.candidates[0].content;
// Append the model's function call message, which includes thought signatures
contents.push(function_call_content);
contents.push({ role: 'user', parts: [{ functionResponse: function_response_part }] });

const final_response = await ai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: contents,
  config: config
});

console.log(final_response.text);

The following shows an example of a request that returns a thought signature:

[{
  "contents": [
    {
      "role": "user",
      "parts": [
        {
          "text": "what is the weather in Lake Tahoe?"
        }
      ]
    },
    {
      "parts": [
        {
          "functionCall": {
            "name": "getWeather",
            "args": {
              "city": "Lake Tahoe"
            }
          },
          "thoughtSignature": "CiIBVKhc7oDPpCaXyJKKssjqr4g3JNOSgJ/M2V+1THC1icsWCmwBVKhc7pBABbZ+zR3e9234WnWWS6GFXmf8IVwpnzjd5KYd7vyJbn/4vTorWBGayj/vbd9JPaZQjxdAIXhoE5mX/MDsQ7M9N/b0qJjHm39tYIBvS4sIWkMDHqTJqXGLzhhKtrTkfbV3RbaJEkQKmwEBVKhc7qVUgC3hfTXZLo9R3AJzUUIx50NKvJTb9B+UU+LBqgg7Nck1x5OpjWVS2R+SsveprIuYOruk2Y0H53J2OJF8qsxTdIq2si8DGW2V7WK8xyoJH5kbqd7drIw1jLb44b6lx4SMyB0VaULuTBki4d+Ljjg1tJTwR0IYMKqDLDZt9mheINsi0ZxcNjfpnDydRXdWbcSwzmK/wgqJAQFUqFzuKgNVElxs3cbO+xebr2IwcOro84nKTisi0tTp9bICPC9fTUhn3L+rvQWA+d3J1Za8at2bakrqiRj7BTh+CVO9fWQMAEQAs3ni0Z2hfaYG92tOD26E4IoZwyYEoWbfNudpH1fr5tEkyqnEGtWIh7H+XoZQ2DXeiOa+br7Zk88SrNE+trJMCogBAVSoXO5e9fBLg7hnbkmKsrzNLnQtLsQm1gNzjcjEC7nJYklYPp0KI2uGBE1PkM8XNsfllAfHVn7LzHcHNlbQ9pJ7QZTSIeG42goS971r5wNZwxaXwCTphClQh826eqJWo6A/28TtAVQWLhTx5ekbP7qb4nh1UblESZ1saxDQAEo4OKPbDzx5BgqKAQFUqFzuVyjNm5i0wN8hTDnKjfpDroEpPPTs531iFy9BOX+xDCdGHy8D+osFpaoBq6TFekQQbz4hIoUR1YEcP4zI80/cNimEeb9IcFxZTTxiNrbhbbcv0969DSMWhB+ZEqIz4vuw4GLe/xcUvqhlChQwFdgIbdOQHSHpatn5uDlktnP/bi26nKuXIwo0AVSoXO7US22OUH7d1f4abNPI0IyAvhqkPp12rbtWLx9vkOtojE8IP+xCfYtIFuZIzRNZqA=="
        }
      ],
      "role": "model"
    },
    {
      "role": "user",
      "parts": [
        {
          "functionResponse": {
            "name": "getWeather",
            "response": {
              "response": {
                "stringValue": "Sunny and hot. 90 degrees Fahrenheit"
              }
            }
          }
        }
      ]
    }
  ],
  # Remainder of request...

To learn more about the limitations and usage of thought signatures, and about thinking models in general, visit the Thinking page.

Parallel function calling

In addition to single-turn function calling, you can also call multiple functions at once. Parallel function calling lets you execute multiple functions at once and is used when the functions are not dependent on each other. This is useful in scenarios like gathering data from multiple independent sources, such as retrieving customer details from different databases, checking inventory levels across various warehouses, or performing multiple actions such as converting your apartment into a disco.

Python

power_disco_ball = {
    "name": "power_disco_ball",
    "description": "Powers the spinning disco ball.",
    "parameters": {
        "type": "object",
        "properties": {
            "power": {
                "type": "boolean",
                "description": "Whether to turn the disco ball on or off.",
            }
        },
        "required": ["power"],
    },
}

start_music = {
    "name": "start_music",
    "description": "Play some music matching the specified parameters.",
    "parameters": {
        "type": "object",
        "properties": {
            "energetic": {
                "type": "boolean",
                "description": "Whether the music is energetic or not.",
            },
            "loud": {
                "type": "boolean",
                "description": "Whether the music is loud or not.",
            },
        },
        "required": ["energetic", "loud"],
    },
}

dim_lights = {
    "name": "dim_lights",
    "description": "Dim the lights.",
    "parameters": {
        "type": "object",
        "properties": {
            "brightness": {
                "type": "number",
                "description": "The brightness of the lights, 0.0 is off, 1.0 is full.",
            }
        },
        "required": ["brightness"],
    },
}

JavaScript

import { Type } from '@google/genai';

const powerDiscoBall = {
  name: 'power_disco_ball',
  description: 'Powers the spinning disco ball.',
  parameters: {
    type: Type.OBJECT,
    properties: {
      power: {
        type: Type.BOOLEAN,
        description: 'Whether to turn the disco ball on or off.'
      }
    },
    required: ['power']
  }
};

const startMusic = {
  name: 'start_music',
  description: 'Play some music matching the specified parameters.',
  parameters: {
    type: Type.OBJECT,
    properties: {
      energetic: {
        type: Type.BOOLEAN,
        description: 'Whether the music is energetic or not.'
      },
      loud: {
        type: Type.BOOLEAN,
        description: 'Whether the music is loud or not.'
      }
    },
    required: ['energetic', 'loud']
  }
};

const dimLights = {
  name: 'dim_lights',
  description: 'Dim the lights.',
  parameters: {
    type: Type.OBJECT,
    properties: {
      brightness: {
        type: Type.NUMBER,
        description: 'The brightness of the lights, 0.0 is off, 1.0 is full.'
      }
    },
    required: ['brightness']
  }
};

Set the function calling mode to allow the use of all of the specified tools. To learn more, you can read about configuring function calling.

Python

from google import genai
from google.genai import types

# Configure the client and tools
client = genai.Client()
house_tools = [
    types.Tool(function_declarations=[power_disco_ball, start_music, dim_lights])
]
config = types.GenerateContentConfig(
    tools=house_tools,
    automatic_function_calling=types.AutomaticFunctionCallingConfig(
        disable=True
    ),
    # Force the model to call 'any' function, instead of chatting.
    tool_config=types.ToolConfig(
        function_calling_config=types.FunctionCallingConfig(mode='ANY')
    ),
)

chat = client.chats.create(model="gemini-2.5-flash", config=config)
response = chat.send_message("Turn this place into a party!")

# Print out each of the function calls requested from this single call
print("Example 1: Forced function calling")
for fn in response.function_calls:
    args = ", ".join(f"{key}={val}" for key, val in fn.args.items())
    print(f"{fn.name}({args})")

JavaScript

import { GoogleGenAI } from '@google/genai';

// Set up function declarations
const houseFns = [powerDiscoBall, startMusic, dimLights];

const config = {
    tools: [{
        functionDeclarations: houseFns
    }],
    // Force the model to call 'any' function, instead of chatting.
    toolConfig: {
        functionCallingConfig: {
            mode: 'any'
        }
    }
};

// Configure the client
const ai = new GoogleGenAI({});

// Create a chat session
const chat = ai.chats.create({
    model: 'gemini-2.5-flash',
    config: config
});
const response = await chat.sendMessage({message: 'Turn this place into a party!'});

// Print out each of the function calls requested from this single call
console.log("Example 1: Forced function calling");
for (const fn of response.functionCalls) {
    const args = Object.entries(fn.args)
        .map(([key, val]) => `${key}=${val}`)
        .join(', ');
    console.log(`${fn.name}(${args})`);
}

Each of the printed results reflects a single function call that the model has requested. To send the results back, include the responses in the same order as they were requested, as shown in the sketch below.
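
A minimal sketch of that step (the mock implementations in the impl dictionary are hypothetical and only illustrate the pattern; it assumes the chat, types, and response objects from the example above):

Python

# Hypothetical mock implementations keyed by function name.
impl = {
    "power_disco_ball": lambda power: {"status": "ok"},
    "start_music": lambda energetic, loud: {"status": "ok"},
    "dim_lights": lambda brightness: {"status": "ok"},
}

# Execute each requested call and build the response parts in the same order.
response_parts = [
    types.Part.from_function_response(
        name=fn.name, response={"result": impl[fn.name](**fn.args)}
    )
    for fn in response.function_calls
]

# Send all results back in one turn; the model then produces the final text.
final = chat.send_message(response_parts)
print(final.text)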

The Python SDK supports automatic function calling, which automatically converts Python functions to declarations, and handles the function call execution and response cycle for you. The following is an example of the disco use case.

Python

from google import genai
from google.genai import types

# Actual function implementations
def power_disco_ball_impl(power: bool) -> dict:
    """Powers the spinning disco ball.

    Args:
        power: Whether to turn the disco ball on or off.

    Returns:
        A status dictionary indicating the current state.
    """
    return {"status": f"Disco ball powered {'on' if power else 'off'}"}

def start_music_impl(energetic: bool, loud: bool) -> dict:
    """Play some music matching the specified parameters.

    Args:
        energetic: Whether the music is energetic or not.
        loud: Whether the music is loud or not.

    Returns:
        A dictionary containing the music settings.
    """
    music_type = "energetic" if energetic else "chill"
    volume = "loud" if loud else "quiet"
    return {"music_type": music_type, "volume": volume}

def dim_lights_impl(brightness: float) -> dict:
    """Dim the lights.

    Args:
        brightness: The brightness of the lights, 0.0 is off, 1.0 is full.

    Returns:
        A dictionary containing the new brightness setting.
    """
    return {"brightness": brightness}

# Configure the client
client = genai.Client()
config = types.GenerateContentConfig(
    tools=[power_disco_ball_impl, start_music_impl, dim_lights_impl]
)

# Make the request
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Do everything you need to this place into party!",
    config=config,
)

print("\nExample 2: Automatic function calling")
print(response.text)
# I've turned on the disco ball, started playing loud and energetic music, and dimmed the lights to 50% brightness. Let's get this party started!

Compositional function calling

Compositional or sequential function calling allows Gemini to chain multiple function calls together to fulfill a complex request. For example, to answer "Get the temperature in my current location", the Gemini API might first invoke a get_current_location() function followed by a get_weather() function that takes the location as a parameter.

The following example demonstrates how to implement compositional function calling using the Python SDK and automatic function calling.

Python

This example uses the automatic function calling feature of the google-genai Python SDK. The SDK automatically converts the Python functions to the required schema, executes the function calls when requested by the model, and sends the results back to the model to complete the task.

import os
from google import genai
from google.genai import types

# Example Functions
def get_weather_forecast(location: str) -> dict:
    """Gets the current weather temperature for a given location."""
    print(f"Tool Call: get_weather_forecast(location={location})")
    # TODO: Make API call
    print("Tool Response: {'temperature': 25, 'unit': 'celsius'}")
    return {"temperature": 25, "unit": "celsius"}  # Dummy response

def set_thermostat_temperature(temperature: int) -> dict:
    """Sets the thermostat to a desired temperature."""
    print(f"Tool Call: set_thermostat_temperature(temperature={temperature})")
    # TODO: Interact with a thermostat API
    print("Tool Response: {'status': 'success'}")
    return {"status": "success"}

# Configure the client and model
client = genai.Client()
config = types.GenerateContentConfig(
    tools=[get_weather_forecast, set_thermostat_temperature]
)

# Make the request
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="If it's warmer than 20°C in London, set the thermostat to 20°C, otherwise set it to 18°C.",
    config=config,
)

# Print the final, user-facing response
print(response.text)

Expected output

When you run the code, you will see the SDK orchestrating the function calls. The model first calls get_weather_forecast, receives the temperature, and then calls set_thermostat_temperature with the correct value based on the logic in the prompt.

Tool Call: get_weather_forecast(location=London)
Tool Response: {'temperature': 25, 'unit': 'celsius'}
Tool Call: set_thermostat_temperature(temperature=20)
Tool Response: {'status': 'success'}
OK. I've set the thermostat to 20°C.

JavaScript

This example shows how to use the JavaScript/TypeScript SDK to do compositional function calling using a manual execution loop.

import { GoogleGenAI, Type } from "@google/genai";

// Configure the client
const ai = new GoogleGenAI({});

// Example Functions
function get_weather_forecast({ location }) {
  console.log(`Tool Call: get_weather_forecast(location=${location})`);
  // TODO: Make API call
  console.log("Tool Response: {'temperature': 25, 'unit': 'celsius'}");
  return { temperature: 25, unit: "celsius" };
}

function set_thermostat_temperature({ temperature }) {
  console.log(
    `Tool Call: set_thermostat_temperature(temperature=${temperature})`,
  );
  // TODO: Make API call
  console.log("Tool Response: {'status': 'success'}");
  return { status: "success" };
}

const toolFunctions = {
  get_weather_forecast,
  set_thermostat_temperature,
};

const tools = [
  {
    functionDeclarations: [
      {
        name: "get_weather_forecast",
        description:
          "Gets the current weather temperature for a given location.",
        parameters: {
          type: Type.OBJECT,
          properties: {
            location: {
              type: Type.STRING,
            },
          },
          required: ["location"],
        },
      },
      {
        name: "set_thermostat_temperature",
        description: "Sets the thermostat to a desired temperature.",
        parameters: {
          type: Type.OBJECT,
          properties: {
            temperature: {
              type: Type.NUMBER,
            },
          },
          required: ["temperature"],
        },
      },
    ],
  },
];

// Prompt for the model
let contents = [
  {
    role: "user",
    parts: [
      {
        text: "If it's warmer than 20°C in London, set the thermostat to 20°C, otherwise set it to 18°C.",
      },
    ],
  },
];

// Loop until the model has no more function calls to make
while (true) {
  const result = await ai.models.generateContent({
    model: "gemini-2.5-flash",
    contents,
    config: { tools },
  });

  if (result.functionCalls && result.functionCalls.length > 0) {
    const functionCall = result.functionCalls[0];

    const { name, args } = functionCall;

    if (!toolFunctions[name]) {
      throw new Error(`Unknown function call: ${name}`);
    }

    // Call the function and get the response.
    const toolResponse = toolFunctions[name](args);

    const functionResponsePart = {
      name: functionCall.name,
      response: {
        result: toolResponse,
      },
    };

    // Send the function response back to the model.
    contents.push({
      role: "model",
      parts: [
        {
          functionCall: functionCall,
        },
      ],
    });
    contents.push({
      role: "user",
      parts: [
        {
          functionResponse: functionResponsePart,
        },
      ],
    });
  } else {
    // No more function calls, break the loop.
    console.log(result.text);
    break;
  }
}

Expected output

When you run the code, you will see the SDK orchestrating the function calls. The model first calls get_weather_forecast, receives the temperature, and then calls set_thermostat_temperature with the correct value based on the logic in the prompt.

Tool Call: get_weather_forecast(location=London)
Tool Response: {'temperature': 25, 'unit': 'celsius'}
Tool Call: set_thermostat_temperature(temperature=20)
Tool Response: {'status': 'success'}
OK. It's 25°C in London, so I've set the thermostat to 20°C.

Compositional function calling is a native Live API feature, which means the Live API can handle function calling similarly to the Python SDK.

Python

# Light control schemas
turn_on_the_lights_schema = {'name': 'turn_on_the_lights'}
turn_off_the_lights_schema = {'name': 'turn_off_the_lights'}

prompt = """
  Hey, can you write and run some Python code to turn on the lights, wait 10s and then turn off the lights?
  """

tools = [
    {'code_execution': {}},
    {'function_declarations': [turn_on_the_lights_schema, turn_off_the_lights_schema]}
]

await run(prompt, tools=tools, modality="AUDIO")

JavaScript

// Light control schemas
const turnOnTheLightsSchema = { name: 'turn_on_the_lights' };
const turnOffTheLightsSchema = { name: 'turn_off_the_lights' };

const prompt = `
  Hey, can you write and run some Python code to turn on the lights, wait 10s and then turn off the lights?
`;

const tools = [
  { codeExecution: {} },
  { functionDeclarations: [turnOnTheLightsSchema, turnOffTheLightsSchema] }
];

await run(prompt, {tools: tools, modality: "AUDIO"});

Function calling modes

The Gemini API lets you control how the model uses the provided tools (function declarations). Specifically, you can set the mode within the function_calling_config.

  • AUTO (default): The model decides whether to generate a natural language response or suggest a function call based on the prompt and context. This is the most flexible mode and is recommended for most scenarios.
  • ANY: The model is constrained to always predict a function call and guarantees adherence to the function schema. If allowed_function_names is not specified, the model can pick from any of the provided function declarations. If allowed_function_names is provided as a list, the model can only pick from the functions in that list. Use this mode when you require a function call in response to every prompt (if applicable).
  • NONE: The model is prohibited from making function calls. This is equivalent to sending the request without any function declarations. Use this to temporarily disable function calling without removing your tool definitions.

Python

from google.genai import types

# Configure function calling mode
tool_config = types.ToolConfig(
    function_calling_config=types.FunctionCallingConfig(
        mode="ANY", allowed_function_names=["get_current_temperature"]
    )
)

# Create the generation config
config = types.GenerateContentConfig(
    tools=[tools],  # not defined here.
    tool_config=tool_config,
)

JavaScript

import { FunctionCallingConfigMode } from '@google/genai';

// Configure function calling mode
const toolConfig = {
  functionCallingConfig: {
    mode: FunctionCallingConfigMode.ANY,
    allowedFunctionNames: ['get_current_temperature']
  }
};

// Create the generation config
const config = {
  tools: tools, // not defined here.
  toolConfig: toolConfig,
};

Automatic function calling (Python only)

When using the Python SDK, you can provide Python functions directly as tools. The SDK automatically converts Python functions to declarations, and handles the function call execution and response cycle for you. The Python SDK then automatically:

  1. Detects function call responses from the model.
  2. Calls the corresponding Python function in your code.
  3. Sends the function response back to the model.
  4. Returns the model's final text response.

To use this, define your function with type hints and a docstring, and then pass the function itself (not a JSON declaration) as a tool:

Python

from google import genai
from google.genai import types

# Define the function with type hints and docstring
def get_current_temperature(location: str) -> dict:
    """Gets the current temperature for a given location.

    Args:
        location: The city and state, e.g. San Francisco, CA

    Returns:
        A dictionary containing the temperature and unit.
    """
    # ... (implementation) ...
    return {"temperature": 25, "unit": "Celsius"}

# Configure the client
client = genai.Client()
config = types.GenerateContentConfig(
    tools=[get_current_temperature]
)  # Pass the function itself

# Make the request
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What's the temperature in Boston?",
    config=config,
)

print(response.text)  # The SDK handles the function call and returns the final text

You can disable automatic function calling with the following code:

Python

config = types.GenerateContentConfig(
    tools=[get_current_temperature],
    automatic_function_calling=types.AutomaticFunctionCallingConfig(disable=True)
)

Automatic function schema declaration

Automatic schema extraction from Python functions doesn't work in all cases. For example, it doesn't handle cases where you describe the fields of a nested dictionary object (a hand-written declaration for such a case is sketched after the from_callable example below). The API is able to describe any of the following types:

Python

AllowedType = (int | float | bool | str | list['AllowedType'] | dict[str, 'AllowedType'])

To see what the inferred schema looks like, you can convert it using from_callable:

Python

def multiply(a: float, b: float):
    """Returns a * b."""
    return a * b

fn_decl = types.FunctionDeclaration.from_callable(callable=multiply, client=client)

# to_json_dict() provides a clean JSON representation.
print(fn_decl.to_json_dict())
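
When automatic extraction can't capture a nested structure, one workaround is to write the declaration by hand using the same JSON schema subset shown in Step 1. The function name and fields below are purely illustrative:

Python

# Hypothetical manual declaration for a function whose "address" parameter is a
# nested object; automatic extraction from a plain dict type hint would not
# capture these field-level descriptions.
create_contact_declaration = {
    "name": "create_contact",
    "description": "Creates a contact with a structured mailing address.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "Full name of the contact."},
            "address": {
                "type": "object",
                "description": "Structured mailing address.",
                "properties": {
                    "street": {"type": "string"},
                    "city": {"type": "string"},
                    "zip_code": {"type": "string"},
                },
                "required": ["street", "city"],
            },
        },
        "required": ["name", "address"],
    },
}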

Multi-tool use: Combine native tools with function calling

You can enable multiple tools, combining native tools with function calling, at the same time. Here's an example that enables two tools, Grounding with Google Search and code execution, in a request using the Live API.

Python

# Multiple tasks example - combining lights, code execution, and search
prompt = """
  Hey, I need you to do three things for me.

    1.  Turn on the lights.
    2.  Then compute the largest prime palindrome under 100000.
    3.  Then use Google Search to look up information about the largest earthquake in California the week of Dec 5 2024.

  Thanks!
  """

tools = [
    {'google_search': {}},
    {'code_execution': {}},
    {'function_declarations': [turn_on_the_lights_schema, turn_off_the_lights_schema]} # not defined here.
]

# Execute the prompt with specified tools in audio modality
await run(prompt, tools=tools, modality="AUDIO")

JavaScript

// Multiple tasks example - combining lights, code execution, and search
const prompt = `
  Hey, I need you to do three things for me.

    1.  Turn on the lights.
    2.  Then compute the largest prime palindrome under 100000.
    3.  Then use Google Search to look up information about the largest earthquake in California the week of Dec 5 2024.

  Thanks!
`;

const tools = [
  { googleSearch: {} },
  { codeExecution: {} },
  { functionDeclarations: [turnOnTheLightsSchema, turnOffTheLightsSchema] } // not defined here.
];

// Execute the prompt with specified tools in audio modality
await run(prompt, {tools: tools, modality: "AUDIO"});

Python developers can try this out in the Live API Tool Use notebook.

Model Context Protocol (MCP)

Model Context Protocol (MCP) is an open standard for connecting AI applications with external tools and data. MCP provides a common protocol for models to access context, such as functions (tools), data sources (resources), or predefined prompts.

The Gemini SDKs have built-in support for MCP, reducing boilerplate code and offering automatic tool calling for MCP tools. When the model generates an MCP tool call, the Python and JavaScript client SDKs can automatically execute the MCP tool and send the response back to the model in a subsequent request, continuing this loop until no more tool calls are made by the model.

Here, you can find an example of how to use a local MCP server with Gemini and the mcp SDK.

Python

Make sure the latest version of the mcp SDK is installed on your platform of choice.

pip install mcp

import os
import asyncio
from datetime import datetime
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from google import genai

client = genai.Client()

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="npx",  # Executable
    args=["-y", "@philschmid/weather-mcp"],  # MCP Server
    env=None,  # Optional environment variables
)

async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Prompt to get the weather for the current day in London.
            prompt = f"What is the weather in London in {datetime.now().strftime('%Y-%m-%d')}?"

            # Initialize the connection between client and server
            await session.initialize()

            # Send request to the model with MCP function declarations
            response = await client.aio.models.generate_content(
                model="gemini-2.5-flash",
                contents=prompt,
                config=genai.types.GenerateContentConfig(
                    temperature=0,
                    tools=[session],  # uses the session, will automatically call the tool
                    # Uncomment if you **don't** want the SDK to automatically call the tool
                    # automatic_function_calling=genai.types.AutomaticFunctionCallingConfig(
                    #     disable=True
                    # ),
                ),
            )
            print(response.text)

# Start the asyncio event loop and run the main function
asyncio.run(run())

JavaScript

Make sure the latest version of the mcp SDK is installed on your platform of choice.

npm install @modelcontextprotocol/sdk

import { GoogleGenAI, FunctionCallingConfigMode, mcpToTool } from '@google/genai';
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Create server parameters for stdio connection
const serverParams = new StdioClientTransport({
  command: "npx", // Executable
  args: ["-y", "@philschmid/weather-mcp"] // MCP Server
});

const client = new Client(
  {
    name: "example-client",
    version: "1.0.0"
  }
);

// Configure the client
const ai = new GoogleGenAI({});

// Initialize the connection between client and server
await client.connect(serverParams);

// Send request to the model with MCP tools
const response = await ai.models.generateContent({
  model: "gemini-2.5-flash",
  contents: `What is the weather in London in ${new Date().toLocaleDateString()}?`,
  config: {
    tools: [mcpToTool(client)],  // uses the session, will automatically call the tool
    // Uncomment if you **don't** want the sdk to automatically call the tool
    // automaticFunctionCalling: {
    //   disable: true,
    // },
  },
});
console.log(response.text)

// Close the connection
await client.close();

Limitations of built-in MCP support

Built-in MCP support is an experimental feature in the SDKs and has the following limitations:

  • Only tools are supported, not resources or prompts.
  • It is available for the Python and JavaScript/TypeScript SDKs.
  • Breaking changes might occur in future releases.

Manual integration of MCP servers is always an option if these limitations affect what you're building.
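
For example, a manual integration can list the server's tools yourself, pass them to the model as ordinary function declarations, and execute the tool on the MCP server when the model requests it. The sketch below is illustrative, not an official recipe; it assumes the initialized session, client, and prompt from the Python MCP example above.

Python

from google.genai import types

# List the tools exposed by the MCP server and convert their metadata into
# Gemini function declarations (MCP tools carry a JSON schema in inputSchema).
mcp_tools = await session.list_tools()
declarations = [
    {"name": t.name, "description": t.description, "parameters": t.inputSchema}
    for t in mcp_tools.tools
]

config = types.GenerateContentConfig(
    tools=[types.Tool(function_declarations=declarations)],
    automatic_function_calling=types.AutomaticFunctionCallingConfig(disable=True),
)

response = await client.aio.models.generate_content(
    model="gemini-2.5-flash", contents=prompt, config=config
)

# If the model requested a tool, execute it on the MCP server yourself, then
# return the result to the model as a function response part (see Step 4).
if response.function_calls:
    call = response.function_calls[0]
    tool_result = await session.call_tool(call.name, arguments=dict(call.args))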

Supported models

This section lists models and their function calling capabilities. Experimental models are not included. You can find a comprehensive capabilities overview on the model overview page.

Model                   Function calling   Parallel function calling   Compositional function calling
Gemini 2.5 Pro          ✔️                 ✔️                          ✔️
Gemini 2.5 Flash        ✔️                 ✔️                          ✔️
Gemini 2.5 Flash-Lite   ✔️                 ✔️                          ✔️
Gemini 2.0 Flash        ✔️                 ✔️                          ✔️
Gemini 2.0 Flash-Lite   X                  X                           X

Best practices

  • Function and parameter descriptions: Be extremely clear and specific in your descriptions. The model relies on these to choose the correct function and provide appropriate arguments.
  • Naming: Use descriptive function names (without spaces, periods, or dashes).
  • Strong typing: Use specific types (integer, string, enum) for parameters to reduce errors. If a parameter has a limited set of valid values, use an enum.
  • Tool selection: While the model can use an arbitrary number of tools, providing too many increases the risk of selecting an incorrect or suboptimal tool. For best results, aim to provide only the tools relevant to the context or task, ideally keeping the active set to a maximum of 10-20. If you have a large total number of tools, consider dynamic tool selection based on conversation context.
  • Prompt engineering:
    • Provide context: Tell the model its role (e.g., "You are a helpful weather assistant.").
    • Give instructions: Specify how and when to use functions (e.g., "Don't guess dates; always use a future date for forecasts.").
    • Encourage clarification: Instruct the model to ask clarifying questions if needed.
  • Temperature: Use a low temperature (e.g., 0) for more deterministic and reliable function calls.
  • Validation: If a function call has significant consequences (e.g., placing an order), validate the call with the user before executing it.
  • Error handling: Implement robust error handling in your functions to gracefully handle unexpected inputs or API failures. Return informative error messages that the model can use to generate helpful responses for the user (see the sketch after this list).
  • Security: Be mindful of security when calling external APIs. Use appropriate authentication and authorization mechanisms. Avoid exposing sensitive data in function calls.
  • Token limits: Function descriptions and parameters count toward your input token limit. If you're hitting token limits, consider limiting the number of functions or the length of descriptions, and break complex tasks into smaller, more focused function sets.
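
A hedged sketch of the error-handling advice above, reusing the get_weather_forecast example from the compositional section (the error shape is illustrative, not a required format):

Python

def get_weather_forecast(location: str) -> dict:
    """Gets the current weather temperature for a given location."""
    try:
        if not location:
            raise ValueError("location must not be empty")
        # TODO: call the real weather API here
        return {"temperature": 25, "unit": "celsius"}
    except Exception as e:
        # Return a structured, informative error instead of raising, so the
        # model can explain the problem to the user in its final response.
        return {"error": f"Could not fetch the forecast: {e}"}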

Notes and limitations