Keeping Track of Your OpenAI API Spend Using Python

A lightweight Python tool that helps you monitor and estimate your API usage costs by calculating tokens and providing a detailed cost breakdown for each request.

Many ELM API users have asked how to estimate their OpenAI API usage costs.

One of our API users at the University of Edinburgh has developed a small open-source Python module that can assist with this.

Overview

The tool provides a convenient way to estimate the cost of a given API request by counting the tokens it uses and multiplying by the relevant OpenAI per-token rates.
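At its core, such an estimate is simple arithmetic: token counts multiplied by per-token rates. The sketch below illustrates the idea only; the model name, rates, and function name are illustrative placeholders, not the module's actual API or current OpenAI prices.

```python
# Illustrative per-million-token rates in USD.
# These are NOT current prices -- always check OpenAI's Pricing page.
PRICES = {
    "gpt-4o": {"prompt": 2.50, "completion": 10.00},
}

def estimate_cost(model, prompt_tokens, completion_tokens):
    """Return a cost breakdown (USD) for one request.

    prompt_tokens / completion_tokens are the token counts reported
    in the API response's usage field.
    """
    rates = PRICES[model]
    prompt_cost = prompt_tokens / 1_000_000 * rates["prompt"]
    completion_cost = completion_tokens / 1_000_000 * rates["completion"]
    return {
        "prompt_cost": prompt_cost,
        "completion_cost": completion_cost,
        "total_cost": prompt_cost + completion_cost,
    }

breakdown = estimate_cost("gpt-4o", prompt_tokens=1000, completion_tokens=500)
```

The genai_pricing module wraps this kind of calculation up with maintained pricing tables, so you do not have to hard-code rates yourself.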

The code has been made available under the permissive MIT Licence, meaning other ELM API users are free to use and adapt it for their own research.

Important:
This tool was developed by a University of Edinburgh researcher and is not an official ELM or OpenAI product.
Please review and test the code carefully before using it in your own projects.

GitHub Repository

The Python module is available here:
https://github.com/gwr3n/genai_pricing

Example Usage

Once you’ve downloaded the genai_pricing.py module (or cloned the GitHub repository), you can use it in your Python project as follows:

from genai_pricing import openai_prompt_cost

# model:  the model name used for the request
# prompt: the prompt text you sent
# answer: the completion text returned
# resp:   the API response object for the request
estimate = openai_prompt_cost(model, prompt, answer, resp)

print("Cost (USD):", estimate["total_cost"])
print("Details:", estimate)


This will return a dictionary containing an estimated breakdown of prompt, completion, and total costs.

Notes and Caveats

Pricing data may change over time — check OpenAI’s Pricing page for the latest rates.

The code is not yet available via PyPI. To use it, download or clone it directly from GitHub.

You may adapt it to your own needs or integrate it with your ELM workflows if you are comfortable working in Python.

Acknowledgements

This tool and example were developed by Dr Roberto Rossi, University of Edinburgh, and shared with permission for other ELM API users.