ChatOpenAI
Use OpenAI's different text generation models by following these instructions:
Installation
Install peer dependencies:
npm install openai
Add Environment Variables
OPENAI_API_KEY="YOUR_SAMPLE_API_KEY"
# You can get one from https://platform.openai.com/api-keys
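Frameworks like Next.js load .env automatically. If you are running a plain Node or TypeScript script instead, one way to load the variable is the dotenv package (npm install dotenv); this is an assumption about your setup, not part of this component:
import "dotenv/config"; // reads .env into process.env

// Sanity check that the key is available before constructing the client.
console.log(Boolean(process.env.OPENAI_API_KEY));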
Copy the code
Add the following code to your utils/chatOpenAi.ts file:
import OpenAI, { ClientOptions } from "openai";
import { RequestOptions } from "openai/core.mjs";
import {
  ChatCompletion,
  ChatCompletionChunk,
  ChatCompletionMessageParam,
  ChatModel,
} from "openai/resources/index.mjs";
import { Stream } from "openai/streaming.mjs";
import fs from "fs";
type ChatOpenAICompletionProps = {
  stream?: boolean;
  max_tokens?: number;
};

interface ChatOpenAiArgs extends ClientOptions {
  model:
    | ChatModel
    | "text-moderation-latest"
    | "text-moderation-stable"
    | "text-moderation-007"
    | "gpt-4o-mini";
}

interface OpenAICallProps {
  prompt: string;
  context?: string;
  systemInstruction?: string;
  outputFormat?: string;
  chatHistory?: Record<"assistant" | "user", string>[];
  temperature?: number;
  image?: { path: string; type: "local" | "remote" }[];
  options?: ChatOpenAICompletionProps;
}

interface ChatCompletionOutput {
  prompt_tokens: number;
  completion_tokens: number;
  output: string;
}

interface OpenAiThreatDetectionProps {
  input: string;
  options?: RequestOptions;
}
export class ChatOpenAI {
  private client: OpenAI;
  private model:
    | ChatModel
    | "text-moderation-latest"
    | "text-moderation-stable"
    | "text-moderation-007"
    | "gpt-4o-mini";

  constructor(props: ChatOpenAiArgs) {
    const { apiKey, model } = props;
    if (!apiKey || apiKey.length === 0) {
      throw new Error("No API key provided for OpenAI.");
    }
    this.client = new OpenAI({ apiKey });
    this.model = model;
  }

  // Sends a chat completion request. Returns a token-usage summary for
  // non-streaming calls, or the raw stream when options.stream is true.
  async chat(
    props: OpenAICallProps
  ): Promise<
    Stream<ChatCompletionChunk> | ChatCompletion | ChatCompletionOutput
  > {
    const {
      prompt,
      context,
      outputFormat,
      options,
      chatHistory,
      temperature,
      systemInstruction,
      image,
    } = props;
    let userMessages: Array<ChatCompletionMessageParam> = [];
    if (systemInstruction) {
      userMessages.push({ role: "system", content: systemInstruction });
    }
    if (chatHistory && chatHistory.length > 0) {
      chatHistory.forEach((chat) => {
        userMessages.push({ role: "user", content: chat.user });
        userMessages.push({ role: "assistant", content: chat.assistant });
      });
    }
    const content = this.createContext(prompt, context, outputFormat, image);
    userMessages.push({
      role: "user",
      // Multimodal content is serialized by createContext, so parse it back here.
      content: image && image.length > 0 ? JSON.parse(content) : content,
    });
    const chatCompletion = await this.client.chat.completions.create({
      messages: userMessages,
      model: this.model,
      temperature: temperature ?? 0.5,
      ...options,
    });
    if (!options?.stream) {
      return {
        // @ts-ignore
        prompt_tokens: chatCompletion.usage?.prompt_tokens ?? 0,
        // @ts-ignore
        completion_tokens: chatCompletion.usage?.completion_tokens ?? 0,
        output:
          // @ts-ignore
          chatCompletion.choices[0].message?.content
            ?.replace("```json\n", "")
            .replace("\n```", "") ?? "",
      };
    }
    return chatCompletion;
  }

  // Runs the input through OpenAI's moderation endpoint.
  async detectThreat(
    props: OpenAiThreatDetectionProps
  ): Promise<OpenAI.Moderations.ModerationCreateResponse> {
    const { input, options } = props;
    const moderation = await this.client.moderations.create({
      input: input,
      model: this.model,
      ...options,
    });
    return moderation;
  }

  // Builds the user message content: plain text by default, or a serialized
  // multimodal content array when image files are provided.
  private createContext(
    prompt: string,
    context?: string,
    outputFormat?: string,
    file?: { path: string; type: "local" | "remote" }[]
  ): string {
    let content = prompt;
    if (context) {
      content += `\ncontext:\n${context}`;
    }
    if (outputFormat) {
      content += `\noutput format:\n${outputFormat}`;
    }
    if (file && file.length > 0) {
      let finalContent = [
        { type: "text", text: content },
        ...file.map((f) => ({
          type: "image_url",
          image_url: {
            url: f.type === "local" ? this.encodeImage(f.path) : f.path,
          },
        })),
      ];
      return JSON.stringify(finalContent);
    }
    return content;
  }

  // Reads a local image and returns it as a base64 data URL.
  private encodeImage = (imagePath: string) => {
    const imageBuffer = fs.readFileSync(imagePath);
    return `data:image/jpeg;base64,${imageBuffer.toString("base64")}`;
  };
}
Usage
Initialize client
Initialize the ChatOpenAI client.
import { ChatOpenAI } from "@/utils/chatOpenAi";
const chatOpenAI = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-3.5-turbo-0125",
});
Simple Chat
A sample chat with a single prompt.
const data = await chatOpenAI.chat({
  prompt: "What is YCombinator? Respond in one liner!",
});
console.log(data);
/**
{
  "prompt_tokens": 19,
  "completion_tokens": 12,
  "output": "YCombinator is a seed money startup accelerator program."
}
**/
Multi-turn Conversational Chat
A sample chat that includes prior chat history.
const dataWithHistory = await chatOpenAI.chat({
  prompt: "Who am I?",
  chatHistory: [
    { user: "Hello", assistant: "Hi there! How can I help you today?" },
    { user: "My name is John Doe, and I am an AI developer!", assistant: "Sure, I will remember that!" },
  ],
});
console.log(dataWithHistory);
/**
{
  "prompt_tokens": 58,
  "completion_tokens": 17,
  "output": "You are John Doe, an AI developer. How can I assist you further today?"
}
**/
Defining Output Format
An example of defining the output format. Note that this is not 100% reliable, since the LLM can sometimes hallucinate.
const dataWithFormat = await chatOpenAI.chat({
  prompt: "Who am I?",
  chatHistory: [
    { user: "Hello", assistant: "Hi there! How can I help you today?" },
    { user: "My name is John Doe, and I am an AI developer!", assistant: "Sure, I will remember that!" },
  ],
  outputFormat: `{"name": "", "occupation": ""}`,
});
console.log(dataWithFormat);
/**
{
  "prompt_tokens": 69,
  "completion_tokens": 14,
  "output": "{\"name\": \"John Doe\", \"occupation\": \"AI developer\"}"
}
**/
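Since the model may still return something that is not valid JSON, it is safer to guard the parse. A minimal sketch (the variable names here are illustrative, not part of this component):
// dataWithFormat is the result from the example above; cast because chat() returns a union type.
const raw = (dataWithFormat as { output: string }).output;
let parsed: { name?: string; occupation?: string } | null = null;
try {
  parsed = JSON.parse(raw);
} catch {
  parsed = null; // fall back: retry the call or surface an error to the user
}
console.log(parsed?.name, parsed?.occupation);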
Defining Context and other optional parameters
const dataWithContext = await chatOpenAI.chat({
  prompt: "Who am I?",
  chatHistory: [
    { user: "Hello", assistant: "Hi there! How can I help you today?" },
    { user: "My name is John Doe, and I am an AI developer!", assistant: "Sure, I will remember that!" },
  ],
  context: "John Doe is an AI developer, specialized in chatbot development and RAG research.",
  systemInstruction: "You're a helpful assistant and your name is AgentGenesis!",
  outputFormat: `{"name": "", "occupation": "", "speciality": "", "your_name": ""}`,
  temperature: 0.6,
  options: {
    max_tokens: 100,
    stream: false,
  },
});
console.log(dataWithContext);
/**
{
  "prompt_tokens": 112,
  "completion_tokens": 41,
  "output": "{\n \"name\": \"John Doe\",\n \"occupation\": \"AI developer\",\n \"speciality\": \"chatbot development and RAG research\",\n \"your_name\": \"AgentGenesis\"\n}"
}
**/
Multimodal Chat
ChatOpenAI supports multimodal inputs, allowing you to combine text with image files. Check out the supported image types on the OpenAI Platform.
Example: Using Public URLs and Local Paths
The following example demonstrates how to send one or more images using public URLs and local paths.
const dataWithImage = await chatOpenAI.chat({
  prompt: "What is the 2nd image depicting?",
  image: [
    {
      path: "./public/busy.jpg",
      type: "local",
    },
    {
      path: "https://2.img-dpreview.com/files/p/E~C1000x0S4000x4000T1200x1200~articles/3925134721/0266554465.jpeg",
      type: "remote",
    },
  ],
  temperature: 0.6,
});
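As with the text-only examples, a non-streaming call resolves to the token-usage summary, so the reply can be read the same way. The cast below is only needed because chat() returns a union type:
// Narrow the union returned by chat() to the non-streaming output shape.
const { output } = dataWithImage as {
  prompt_tokens: number;
  completion_tokens: number;
  output: string;
};
console.log(output);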
Streaming text
A sample of creating and handling streamed responses.
const dataWithStreaming = await chatOpenAI.chat({
  prompt: "Who am I?",
  chatHistory: [
    { user: "Hello", assistant: "Hi there! How can I help you today?" },
    { user: "My name is John Doe, and I am an AI developer!", assistant: "Sure, I will remember that!" },
  ],
  context: "John Doe is an AI developer, specialized in chatbot development and RAG research.",
  systemInstruction: "You're a helpful assistant and your name is AgentGenesis!",
  temperature: 0.6,
  options: {
    max_tokens: 100,
    stream: true,
  },
});
// @ts-ignore - chat() returns a union type; stream: true guarantees a stream here
for await (const chunk of dataWithStreaming) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
/**
You are John Doe, an AI developer specialized in chatbot development and RAG research. How can I assist you further, John?
**/
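If you need the full text once the stream has finished (for example, to store it), you can accumulate the chunks instead of writing them to stdout. A minimal sketch, assuming the same call as above:
// Collect the streamed chunks into a single string.
let fullText = "";
// @ts-ignore - same union-type narrowing caveat as above
for await (const chunk of dataWithStreaming) {
  fullText += chunk.choices[0]?.delta?.content ?? "";
}
console.log(fullText);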
Threat Detection
Detecting input prompts that may violate OpenAI's usage policies.
const chatOpenAI = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "text-moderation-stable",
});
const data = await chatOpenAI.detectThreat({
  input: "sample_text",
});
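detectThreat resolves to OpenAI's ModerationCreateResponse, so the flagged status and category breakdown can be inspected directly, for example:
// Each input produces one entry in results; flagged is true when a policy is violated.
const result = data.results[0];
console.log(result.flagged);
console.log(result.categories); // per-category booleans, e.g. harassment, violence
console.log(result.category_scores); // per-category confidence scores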
Props
chat
Prop | Type | Description | Default |
---|---|---|---|
prompt | string | Prompt provided by user. | "" |
context | string? | Additional context user wants to provide. | "" |
outputFormat | string? | Particular format in which the user wants their output. | "" |
chatHistory | Record<"assistant" \| "user", string>[]? | Conversational history. | [] |
temperature | number? | Randomness of the LLM. | 0.5 |
systemInstruction | string? | Any specific instruction the user wants to give to the system. | "" |
image | { path: string; type: "local" \| "remote" }[]? | Local paths or remote URLs of images to attach. | [] |
options | ChatOpenAICompletionProps? | Additional args as per OpenAI docs. | {} |
detectThreat
Prop | Type | Description | Default |
---|---|---|---|
input | string | Text prompt provided by user. | "" |
options | RequestOptions? | Additional args as per OpenAI docs. | {} |
Credits
This component is built on top of OpenAI's Node SDK.