OpenAPI meets OpenAI: AI-Driven Dynamic Mock APIs

In a fast-paced development environment, we often encounter various types of blocking dependencies within or between teams. One such dependency is getting the backend APIs ready to be integrated into consumer interfaces and flows.

Making backend APIs available on time is challenging, especially when the frontend and backend teams are working simultaneously, so it is crucial to resolve the API dependency quickly to avoid unnecessary delays.


Traditional Solution

Traditionally, teams create dummy APIs to serve placeholder responses once API specifications are finalized.

However, this approach has its drawbacks. Firstly, implementing these dummy APIs adds unnecessary development effort and time. Secondly, it's difficult to create multiple placeholder responses to cover all possible scenarios, so teams often implement a single basic response. Finally, any changes in API specifications during development require extra effort to update the placeholder responses accordingly.

AI-Driven Solution

To address these challenges, we developed a solution: an intelligent service that dynamically generates placeholder responses using AI. It serves as a link between the frontend and the not-yet-implemented backend APIs 😀, allowing backend developers to focus on the actual implementation.

The solution uses OpenAPI specifications to identify the required APIs and their expected response formats, and then marries them with the OpenAI chat API to generate placeholder responses in real time.

Unlike traditional approaches that serve static responses, this solution generates dynamic responses, which makes it easy to serve placeholder responses even for paginated APIs.

Implementation details

The implementation involves creating a wildcard route to capture any unimplemented request, extracting relevant details from predefined OpenAPI specifications, and generating prompts for AI-driven responses. The process is managed by a service that interacts with an AI model, ensuring that the responses align with the specification.
For this blog, I will provide implementation details using Express.js, but the same concept can be applied in any programming language or framework.

Step 1: OpenAPI specification

Prepare the OpenAPI specifications for the required APIs. I have used JSON format for OpenAPI specs, but the YAML format can also be used.
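
For illustration, a minimal JSON spec for a hypothetical GET /users endpoint (the path and fields here are placeholders, not part of any real project) might look like this:

{
  "openapi": "3.0.0",
  "info": { "title": "Mock Target API", "version": "1.0.0" },
  "paths": {
    "/users": {
      "get": {
        "summary": "List users",
        "responses": {
          "200": {
            "description": "A paginated list of users",
            "content": {
              "application/json": {
                "schema": {
                  "type": "object",
                  "properties": {
                    "page": { "type": "integer" },
                    "users": {
                      "type": "array",
                      "items": {
                        "type": "object",
                        "properties": {
                          "id": { "type": "integer" },
                          "name": { "type": "string" },
                          "email": { "type": "string" }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}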

Step 2: Wildcard route

Add a wildcard GET route to the routes file. This wildcard route will capture any unimplemented/undefined APIs in the codebase. For simplicity, I have created a wildcard route for GET requests, but in the real world, we have to add wildcard routes for other HTTP methods too.

The wildcard route invokes a placeholderResponse service (implementation details in Step 4) for the response.

app.get("/*", async function (req, res) {
  const resp = await placeholderResponse(req.originalUrl, "GET");
  res.status(200).send(resp);
});
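
Covering the other HTTP methods works the same way. A minimal sketch, assuming placeholderResponse already accepts any HTTP method string, could look like this:

// Catch-all routes for the remaining HTTP methods; each one simply
// forwards the method name and original URL to the same placeholder service
["post", "put", "patch", "delete"].forEach((method) => {
  app[method]("/*", async function (req, res) {
    const resp = await placeholderResponse(req.originalUrl, method.toUpperCase());
    res.status(200).send(resp);
  });
});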


Step 3: Prompt Engineering

Prepare the OpenAI prompts that instruct the model to generate the required response. I have created a sample prompt below.

const userPrompt = (fileContent, route) =>
  `${fileContent} \n` +
  `Generate a mock response for the "${route}" endpoint using the above provided OpenAPI spec.`;

const systemPrompt =
  `You are an expert in reading OpenAPI specs and generating mock data and responses for a given API. Your job is not to validate the request route and its parameters. You are not expected to return an error. Do not include any informative text or suggestions, statements like "Understood, ", "Certainly,", "In this mock response" etc. Reply with only a JSON response. \n` +
  `You will be given an OpenAPI specification to understand the API response format. Try to generate human-readable and random values in a mock response. Do not repeat values in response. \n` +
  `If an attribute is an array, include 5 to 10 elements for it. \n` +
  `If you have difficulty generating a response, please do not make up a response yourself, instead return "Something went wrong" \n`;

module.exports = { userPrompt, systemPrompt };


Step 4: Service Implementation

Finally, implement the placeholder response generation service. The key responsibilities of the service are:

  1. Generate the system and user prompts using the OpenAPI specification, the request method, and the URL
  2. Invoke the OpenAI chat completion API to generate a dummy placeholder response
  3. Return the response

const fs = require("fs").promises;
const prompt = require("./PROMPT");
const openAPISpecFilePath = "/PATH/WHERE/OPENAPI/SPEC/SAVED";
const localDataStoreObject = {}; // simple in-memory store, handy for caching responses later

async function placeholderResponse(originalUrl, method) {
  // Read the OpenAPI spec and build the prompts for the AI model
  const fileContent = await fs.readFile(openAPISpecFilePath, "utf8");

  const userPrompt = prompt.userPrompt(fileContent, method + " " + originalUrl);
  const systemPrompt = prompt.systemPrompt;

  // Invoke the OpenAI chat completion API using the above prompts to
  // generate a placeholder response (a sketch of this call follows below)
  const dummyResponse = await generatePlaceholderWithOpenAI(systemPrompt, userPrompt);

  return dummyResponse;
}

module.exports = { placeholderResponse };
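
A minimal sketch of the OpenAI call referenced in the comment above, assuming the official openai Node.js package (v4-style client), the gpt-3.5-turbo model mentioned in the optimizations below, and an illustrative helper name generatePlaceholderWithOpenAI:

// Illustrative helper: calls the OpenAI chat completion API with the
// system and user prompts and parses the JSON the model returns
const OpenAI = require("openai");
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function generatePlaceholderWithOpenAI(systemPrompt, userPrompt) {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userPrompt },
    ],
  });

  // The system prompt asks the model to reply with JSON only,
  // so the first choice's content should parse directly
  return JSON.parse(completion.choices[0].message.content);
}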


Finally: Optimizations

We know OpenAI API calls are a bit heavy on the wallet and take a few seconds to respond, so it pays to optimize. We can implement the following optimizations to minimize OpenAI model interactions:

  1. Caching: Implement a caching strategy for placeholder responses to avoid calling OpenAI every single time a request is received (a minimal sketch follows this list).
  2. Tokens: OpenAI charges are based on the number of tokens used in the prompt and response. To reduce the prompt token size, break the OpenAPI specification into smaller, more focused route groups and send only the relevant part to OpenAI, instead of one big file.
  3. Model: We have used the OpenAI GPT-3.5-turbo model to keep the cost low, but other cheaper LLM models or services can also be used.
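
For the caching optimization, a simple approach is to key responses by method and URL in the localDataStoreObject declared in Step 4. Here is a minimal in-memory sketch living in the same service module; the cachedPlaceholderResponse name is illustrative, and a real project might add expiry or a shared cache:

// Wrap the placeholder service with a simple in-memory cache so each
// method + URL combination hits OpenAI at most once
async function cachedPlaceholderResponse(originalUrl, method) {
  const cacheKey = method + " " + originalUrl;

  if (localDataStoreObject[cacheKey]) {
    return localDataStoreObject[cacheKey];
  }

  const response = await placeholderResponse(originalUrl, method);
  localDataStoreObject[cacheKey] = response;
  return response;
}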

We have seen significant time savings by using this solution. It saves us anywhere from 1-3 hours per route. So, if the project has 10 routes, we are easily saving 10-30 hours. Additionally, there are noteworthy time savings for the frontend team, as the approach helps mitigate post-integration issues, allowing them to develop their components with more realistic data.

We have used this solution for multiple client projects. And we love it.