Getting Started with TypeScript and OpenAI Tools
In this tutorial, we'll build a TypeScript application that uses the OpenAI SDK's Chat Completions API with function calling. We'll also show how to connect external tools using AgentRPC to create a powerful AI assistant.
Prerequisites
Before we start, make sure you have:
- Node.js 16 or higher
- npm or yarn
- A basic understanding of TypeScript
- An OpenAI API key (get one at platform.openai.com)
Setting Up Your TypeScript Project
Let's start by creating a new TypeScript project and installing the necessary dependencies:
mkdir openai-typescript-app
cd openai-typescript-app
npm init -y
npm install openai dotenv typescript @types/node
npm install -D tsx
Now, let's create a tsconfig.json file for TypeScript configuration:
npx tsc --init
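The generated tsconfig.json contains many commented-out options. A minimal configuration along these lines is enough for this tutorial (the values here are reasonable defaults, not strict requirements):

```json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  }
}
```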
Create a .env file in your project root to store your API keys:
OPENAI_API_KEY=your_openai_api_key
Basic OpenAI Chat Completions with TypeScript
Let's start with a simple example using the OpenAI Completions SDK in TypeScript:
// basic-completion.ts
import { OpenAI } from "openai";
import dotenv from "dotenv";

dotenv.config();

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content: "You are a helpful assistant.",
      },
      {
        role: "user",
        content: "Tell me about TypeScript's advantages over JavaScript.",
      },
    ],
  });

  console.log(completion.choices[0]?.message.content);
}

main().catch(console.error);
Run this example with:
npx tsx basic-completion.ts
Using OpenAI Function Calling
OpenAI's Chat Completions API supports function calling, which allows models to invoke functions you define. Let's implement a simple weather function:
// function-calling.ts
import { OpenAI } from "openai";
import dotenv from "dotenv";

dotenv.config();

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Mock weather function
function getWeather(location: string) {
  // In a real app, you would call an actual weather API
  return {
    location,
    temperature: "72°F",
    condition: "Sunny",
    humidity: "45%",
  };
}

async function main() {
  // Define the function the model can call
  const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
    {
      type: "function",
      function: {
        name: "getWeather",
        description: "Get the current weather for a location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state, e.g., San Francisco, CA",
            },
          },
          required: ["location"],
        },
      },
    },
  ];

  // Create a completion with the tools
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content: "You are a helpful weather assistant.",
      },
      {
        role: "user",
        content: "What's the weather like in San Francisco today?",
      },
    ],
    tools,
  });

  const message = completion.choices[0]?.message;
  // content is usually null when the model decides to call a tool instead
  console.log("Initial response:", message?.content);

  // Handle tool calls if the LLM decides to use them
  if (message?.tool_calls) {
    for (const toolCall of message.tool_calls) {
      if (toolCall.function.name === "getWeather") {
        const args = JSON.parse(toolCall.function.arguments);
        const weatherData = getWeather(args.location);

        // Use the result to generate a final response
        const finalResponse = await openai.chat.completions.create({
          model: "gpt-4o",
          messages: [
            {
              role: "system",
              content: "You are a helpful weather assistant.",
            },
            {
              role: "user",
              content: "What's the weather like in San Francisco today?",
            },
            message,
            {
              role: "tool",
              tool_call_id: toolCall.id,
              content: JSON.stringify(weatherData),
            },
          ],
        });

        console.log(
          "Final response:",
          finalResponse.choices[0]?.message.content,
        );
      }
    }
  }
}

main().catch(console.error);
Run this with:
npx tsx function-calling.ts
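The model returns function arguments as a JSON string, so it's worth guarding the JSON.parse call before invoking your local function. A minimal sketch (the helper name and shape here are illustrative, not part of the OpenAI SDK):

```typescript
// Illustrative helper: safely parse and type-check the arguments string
// a tool call carries before invoking the local weather function.
interface WeatherArgs {
  location: string;
}

function parseWeatherArgs(raw: string): WeatherArgs | null {
  try {
    const parsed: unknown = JSON.parse(raw);
    if (
      typeof parsed === "object" &&
      parsed !== null &&
      typeof (parsed as WeatherArgs).location === "string"
    ) {
      return parsed as WeatherArgs;
    }
    return null; // well-formed JSON, but not the shape we expect
  } catch {
    return null; // not valid JSON at all
  }
}
```

Returning null instead of throwing lets the calling loop skip a malformed tool call and still answer the user.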
Enhancing with AgentRPC: Connecting External Tools
For more complex applications, you might need to connect to external tools or services. Let's enhance our example using AgentRPC to register and manage external tool connections:
1. Install AgentRPC
First, you'll need to install AgentRPC and create an account:
npm install agentrpc zod
Get an API key from the AgentRPC Dashboard and add it to your .env file:
OPENAI_API_KEY=your_openai_api_key
AGENTRPC_API_SECRET=your_agentrpc_api_key
2. Create an External Weather Tool
Let's create a weather tool service using AgentRPC:
// weather-tool.ts
import { AgentRPC } from "agentrpc";
import { z } from "zod";
import dotenv from "dotenv";

dotenv.config();

const rpc = new AgentRPC({
  apiSecret: process.env.AGENTRPC_API_SECRET!,
});

// Register the weather tool with schema validation using Zod
rpc.register({
  name: "getWeather",
  description: "Return weather information at a given location",
  schema: z.object({ location: z.string() }),
  handler: async ({ location }) => {
    // In a real app, you would call a weather API here
    return {
      location,
      temperature: "72°F",
      condition: "Sunny",
      humidity: "45%",
      windSpeed: "5 mph",
    };
  },
});

// Start the RPC server
rpc.listen();
console.log("Weather tool service is running!");
3. Connect OpenAI Completions to AgentRPC
Now, let's create our main application that connects the OpenAI Completions SDK to our AgentRPC tool:
// agent-app.ts
import { OpenAI } from "openai";
import { AgentRPC } from "agentrpc";
import dotenv from "dotenv";

dotenv.config();

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const rpc = new AgentRPC({ apiSecret: process.env.AGENTRPC_API_SECRET! });

async function main() {
  // Get AgentRPC tools in OpenAI-compatible format
  const tools = await rpc.OpenAI.getTools();

  // Create a completion with the tools
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You are a helpful weather assistant. Use the getWeather tool when asked about weather conditions.",
      },
      {
        role: "user",
        content: "What's the weather like in San Francisco today?",
      },
    ],
    tools,
  });

  const message = completion.choices[0]?.message;
  // content is usually null when the model decides to call a tool instead
  console.log("Initial response:", message?.content);

  // Handle tool calls if the LLM decides to use them
  if (message?.tool_calls) {
    for (const toolCall of message.tool_calls) {
      console.log("Calling external tool:", toolCall.function.name);

      // Execute the tool and get the result through AgentRPC
      const result = await rpc.OpenAI.executeTool(toolCall);
      console.log("Tool result:", result);

      // Generate a final response with the tool result
      const finalResponse = await openai.chat.completions.create({
        model: "gpt-4o",
        messages: [
          {
            role: "system",
            content: "You are a helpful weather assistant.",
          },
          {
            role: "user",
            content: "What's the weather like in San Francisco today?",
          },
          message,
          {
            role: "tool",
            tool_call_id: toolCall.id,
            content: JSON.stringify(result),
          },
        ],
      });

      console.log("Final response:", finalResponse.choices[0]?.message.content);
    }
  }
}

main().catch(console.error);
Running Your TypeScript OpenAI Application
To run the complete example:
- First, start your weather tool service:
npx tsx weather-tool.ts
- In a separate terminal, run your main application:
npx tsx agent-app.ts
The OpenAI model will recognize the weather request, call the tool through AgentRPC, and formulate a helpful response using the tool result.
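Rather than retyping the original messages for the follow-up request (as the examples above do for brevity), you can keep a running message list and append the assistant's tool-call message plus each tool result. A sketch using a simplified message shape (not the SDK's full types):

```typescript
// Simplified message shape for illustration; the real SDK types are richer.
type Msg = {
  role: "system" | "user" | "assistant" | "tool";
  content: string | null;
  tool_call_id?: string;
};

// Append the assistant's tool-call message and one tool result to the
// conversation history, returning a new array for the follow-up request.
function withToolResult(
  history: Msg[],
  assistantMsg: Msg,
  toolCallId: string,
  result: unknown,
): Msg[] {
  return [
    ...history,
    assistantMsg,
    { role: "tool", tool_call_id: toolCallId, content: JSON.stringify(result) },
  ];
}
```

Building the follow-up messages this way keeps the second request consistent with whatever the user actually asked, instead of hard-coding the prompt twice.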