Tools (Function calling)

Use tools to let the model call functions in your backend. The model decides when a tool is needed and returns a tool call (with arguments streamed incrementally if streaming is enabled); you run the function, post the result back as a tool message linked via tool_call_id, then call the endpoint again so the model can continue with the new context.

Endpoint: POST https://api.aifoundryhub.com/v1/chat/completions


  1. Define tools — Provide one or more functions with a JSON‑Schema parameters object.
  2. Ask the model — Send messages and tools with tool_choice: "auto" (or force a specific tool; see the snippet after this list).
  3. Model returns a tool call — Read choices[0].message.tool_calls[]; each entry is { id, type: 'function', function: { name, arguments } }.
  4. Run your function — Execute the real function in your code; capture the result string/JSON.
  5. Return a tool message — Append { role: 'tool', content: <result>, tool_call_id: <id> } to messages.
  6. Continue the chat — Call the endpoint again; the model will use the tool output to finish.
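
To force a specific tool instead of letting the model choose, pass an object for tool_choice that names the function. This mirrors the OpenAI-compatible shape; the exact format accepted by this endpoint is an assumption, so verify it against your API version:

// Force a call to get_current_weather rather than a free-form answer
const forced = await client.chat.completions.create({
  model: "gpt-4.1",
  messages,
  tools,
  tool_choice: { type: "function", function: { name: "get_current_weather" } },
});

The complete round trip, from tool definition to final answer: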

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.AI_FOUNDRY_HUB_API_KEY,
  baseURL: "https://api.aifoundryhub.com/v1",
});

// Tool definitions: each function describes its inputs with a JSON-Schema parameters object
const tools = [
  {
    type: "function",
    function: {
      name: "get_current_weather",
      description: "Get current weather for a city",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string", description: "City or location name" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location", "unit"],
      },
    },
  },
];

const messages = [
  { role: "system", content: "You can call functions to help the user." },
  { role: "user", content: "Weather in Moscow in celsius?" },
];

// 1) Ask the model; it may return one or more tool calls
let run = await client.chat.completions.create({
  model: "gpt-4.1",
  messages,
  tools,
  tool_choice: "auto",
});

const msg = run.choices[0].message;
if (msg.tool_calls?.length) {
  // Always include the assistant message that requested the tool(s)
  messages.push({ role: "assistant", content: msg.content ?? "", tool_calls: msg.tool_calls });
  for (const call of msg.tool_calls) {
    const args = JSON.parse(call.function.arguments || "{}");
    // 2) Execute your function — mock implementation here
    const result = `It is 20°C and clear in ${args.location}.`;
    // 3) Return the tool result, linked by tool_call_id
    messages.push({ role: "tool", tool_call_id: call.id, content: result });
  }
  // 4) Continue the chat with the tool outputs in context
  run = await client.chat.completions.create({
    model: "gpt-4.1",
    messages,
  });
}

// message.content is a plain string (or null), not an array of parts
console.log(run.choices[0].message.content);
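
The arguments string is model-generated, so it is not guaranteed to be valid JSON or to match your schema. A defensive-parsing sketch (parseToolArgs is a hypothetical helper, not part of the SDK):

// Hypothetical helper: parse and loosely validate tool-call arguments
function parseToolArgs(call) {
  let args;
  try {
    args = JSON.parse(call.function.arguments || "{}");
  } catch (err) {
    return { error: `Invalid JSON for ${call.function.name}: ${err.message}` };
  }
  if (typeof args.location !== "string") {
    return { error: "Missing required string field: location" };
  }
  return { args };
}

If validation fails, send the error text back as the tool message content; the model can then correct the call on its next turn.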

The response is a chat.completion object. When the model calls a tool, the assistant message carries tool_calls[] and its content is null. After you send the role: "tool" messages and call the endpoint again, the assistant responds in natural language.

{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "index": 0,
            "id": "call_abc123",
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "arguments": "{\"location\":\"Moscow\",\"unit\":\"celsius\"}"
            }
          }
        ]
      }
    }
  ]
}

Note on streaming: when stream: true, function arguments may arrive incrementally in chat.completion.chunk deltas under choices[].delta.tool_calls[].function.arguments. Concatenate the chunks to build the full JSON, then parse.
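
A minimal accumulation sketch, assuming the delta shape described above (index identifies which tool call a fragment belongs to; messages and tools are the same as in the earlier example):

const stream = await client.chat.completions.create({
  model: "gpt-4.1",
  messages,
  tools,
  stream: true,
});

// Accumulate fragments keyed by each tool call's index in the delta array
const calls = [];
for await (const chunk of stream) {
  for (const delta of chunk.choices[0]?.delta?.tool_calls ?? []) {
    const slot = (calls[delta.index] ??= { id: "", name: "", arguments: "" });
    if (delta.id) slot.id = delta.id;
    if (delta.function?.name) slot.name += delta.function.name;
    if (delta.function?.arguments) slot.arguments += delta.function.arguments;
  }
}

// Each slot.arguments is now a complete JSON string, ready to parse
for (const call of calls) {
  const args = JSON.parse(call.arguments || "{}");
  // ...execute the function and post a tool message, as in the non-streaming flow
}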