Using ChatGPT as a backend

Dmitry Mosquid
4 min read · May 26


The Overlord

Probably the last backend you will ever build *

I want to start with a small disclaimer: in this article, I am sharing ideas and tools that beginners may find helpful. In a real-world app, you will want to use tools like LangChain and/or Pinecone, but here we keep it simple.

One backend to rule them all

There are thousands of creative applications using ChatGPT under the hood, and millions more on the way. It is also surprisingly easy to use. Like, really simple. However, the first steps can be a little counterintuitive.

I’ve helped a number of software engineers with their ChatGPT projects, and all of them initially had wrong assumptions and tended to overestimate the complexity. Let’s break it down into building blocks and talk about each one.

Getting Started

Completion vs. Chat Completion?

The OpenAI API features two pretty similar endpoints: /completions and /chat/completions. Chat completion takes a series of messages as input, while completion takes a single string. The API provides several message roles that can be used to give the model context and additional instructions. In this article, I’ll focus only on /chat/completions, as I find it a lot more useful.
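To make the difference concrete, here is a rough sketch of the two request body shapes. The helper functions are my own illustration, not part of any SDK; the payload fields match the OpenAI API reference, and "text-davinci-003" is just one legacy completion model used as an example.

```javascript
// A completion request takes a single prompt string...
function buildCompletionPayload(prompt) {
  return {
    model: "text-davinci-003",
    prompt, // one string
  };
}

// ...while a chat completion request takes an array of role-tagged messages.
function buildChatPayload(messages) {
  return {
    model: "gpt-3.5-turbo",
    messages, // [{ role, content }, ...]
  };
}

const chat = buildChatPayload([{ role: "user", content: "Hello!" }]);
console.log(chat.messages.length); // 1
```

The messages array is what makes chat completion feel "stateful": you can replay earlier turns on every request.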

Message Roles

In addition to the message content, you need to provide a role. There are three roles available:

  1. System: used to calibrate the behavior of the assistant.
  2. Assistant: the role assigned to the chatbot responses. You can use the “assistant” role to “continue” a conversation or make the chat experience more stateful.
  3. User: the role assigned to messages sent by a user.

The typical conversation begins with a “system” message to set the tone and perhaps a response format, followed by a “user” message. Here is an example:

const messages = [{
  role: "system",
  content: 'As an AI model, you should respond with binary answers that contain only the words "yes" and "no".'
}, {
  role: "user",
  content: "Can a person fly without the use of additional tools?"
}];

Temperature

Temperature is a parameter that lets you set the desired level of entropy, or randomness. The documentation is a bit inconsistent about the range: some guides claim it goes from 0 to 1, while the official documentation states it goes from 0 to 2. If you want the assistant to be more creative, set a higher temperature; for a stricter, more focused mode, choose 0.
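In practice, temperature is just one more number in the request body. The clamp helper below is my own addition (the API would reject an out-of-range value anyway); it is only meant to show where the parameter goes.

```javascript
// Temperature rides along in the request payload. The official range is
// 0 to 2; this guard (my own, not part of the API) catches typos early.
function withTemperature(payload, temperature) {
  if (temperature < 0 || temperature > 2) {
    throw new RangeError("temperature must be between 0 and 2");
  }
  return { ...payload, temperature };
}

const deterministic = withTemperature(
  { model: "gpt-3.5-turbo", messages: [{ role: "user", content: "Hi" }] },
  0 // strict, focused answers
);
```

For a classification-style task like the capital game below, temperature 0 is usually what you want.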


Tokens

Tokens are the basic units that OpenAI GPT models (including ChatGPT) use to measure the length of a text.

You can specify the maximum number of tokens you want the model to generate via the max_tokens parameter. Keep in mind that your input also counts against the model’s context window, so a long prompt leaves fewer tokens for the completion. To learn more about tokens, you can use OpenAI’s tokenizer tool.
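For a quick sanity check of your token budget, a common rule of thumb is that one token is roughly four characters of English text. This heuristic is an approximation I am adding here, not something from the article; for exact counts, use a real tokenizer such as the tiktoken library.

```javascript
// Very rough heuristic: ~4 characters per token for English text.
// Use a real tokenizer (e.g. tiktoken) for exact counts; this is only
// for ballpark budgeting against the context window.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

const prompt = "Can a person fly without the use of additional tools?";
const maxTokens = 100;
// Prompt tokens plus the completion budget must fit the context window
// (4096 tokens for gpt-3.5-turbo at the time of writing).
const budgetOk = estimateTokens(prompt) + maxTokens <= 4096;
```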

GPT Overlord

I built a small NPM package called gpt-overlord that aims to provide a simple abstraction layer on top of the OpenAI node client. With this package, you can:

  • Specify the Response schema.
  • Provide additional instructions to the AI model.
  • Provide additional context or chat history.

Let’s build a simple game where players have to name the capital of a country. We will build an API endpoint that does the following:

  • Validates the player’s answer.
  • Returns the correct answer if the player gave the wrong answer.

First, you need to get your OpenAI API key. Please note that the free tier didn’t work at all for me, so consider upgrading to the paid plan.

Second, let’s add gpt-overlord to your project:

yarn add gpt-overlord

Now we need to create an instance of the “overlord”:

const overlord = new GPTOverlord({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-3.5-turbo",
  temperature: 0,
  // the format of the chat completion
  schema: {
    isCorrect: "true | false",
    correctAnswer: "..",
  },
  setupMessages: [
    // describing the rules of our game
    {
      role: "system",
      content:
        "We are playing a game. I give you the name of the country and its capital. You tell me if I am right or wrong.",
    },
  ],
});

Finally, we can try to “talk” to our assistant:

const country = "Spain";
const capital = "Barcelona";
const response = await overlord.prompt(
  `{country: ${country}; capital: ${capital}}`
);

// response: {"isCorrect":"false","correctAnswer":"Madrid"}

And we are done. Just hook it up to your API endpoint and you should be good to go. Remember I told you it was simple?
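If you want a sketch of that last step, here is one way to wire the overlord into an endpoint. The handler shape, validation, and error handling are my own; it is framework-agnostic, and the overlord is injected as an argument so you can test it with a stub instead of a live API key.

```javascript
// Framework-agnostic request handler. `overlord` is assumed to be the
// GPTOverlord instance created above; passing it in (instead of closing
// over a global) makes the handler trivially testable.
async function handleCheckCapital(overlord, { country, capital }) {
  if (!country || !capital) {
    return { status: 400, body: { error: "country and capital are required" } };
  }
  try {
    const answer = await overlord.prompt(
      `{country: ${country}; capital: ${capital}}`
    );
    return { status: 200, body: answer };
  } catch (err) {
    // The upstream model call can fail (rate limits, timeouts, etc.)
    return { status: 500, body: { error: "Model call failed" } };
  }
}
```

From here, mounting it on an Express route is a one-liner: parse the JSON body, call the handler, and send `body` with `status`.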

P.S. For the source code and issues, please visit the GitHub repository.
