Creating an ELM chatbot with Python and Flask

Objective: to create a simple Flask app for hosting an ELM chatbot online

 

Requirements

This example requires Python 3.7+ as well as the OpenAI Python API library (https://pypi.org/project/openai/), python-dotenv (https://pypi.org/project/python-dotenv/) and Flask (https://pypi.org/project/Flask/). For local development, Flask-Cors (https://pypi.org/project/Flask-Cors/) may also be necessary. These can be installed using pip:

pip install

pip install openai==1.26.0      # this is the version tested with the scripts below
pip install python-dotenv
pip install Flask
pip install Flask-Cors          # only required for local testing

You will also need access to a private ELM API key.

 

Using the private ELM API key

This example requires access to an ELM API key, and this key must be kept private. One method of doing this during development is to load the key from a local environment variable, using a .env file saved in your project root and added to .gitignore. The .env file should contain:

ELM API key

OPENAI_API_KEY=<ENTER-YOUR-API-KEY-HERE>

OPENAI_BASE_URL=https://elm-proxy.edina.ac.uk/v1

The file also sets OPENAI_BASE_URL, the path that the openai Python library will use to direct requests to ELM.

Your Python script can now access these environment variables, without exposing their values in the script itself, by using dotenv to load them:

Adding dotenv into your script

import os

from dotenv import load_dotenv
from flask import Flask, request
from flask_cors import CORS  # remove in production
from openai import OpenAI

load_dotenv()  # reads OPENAI_API_KEY and OPENAI_BASE_URL from the .env file

client = OpenAI(
    # OPENAI_BASE_URL is picked up automatically by the OpenAI client
    api_key=os.environ.get("OPENAI_API_KEY"),
)

 

A simple Flask app

Flask (see details and tutorials here: https://flask.palletsprojects.com/en/3.0.x/) is a web framework that enables the production of web apps in Python. This example demonstrates a very simple web app that takes a user prompt, sends it to ELM and returns ELM's response. For simplicity, the app only uses URL query params and returns a simple JSON string - no HTML pages required.

The core method in the Flask app is as follows:

def create_app():
    app = Flask(__name__)
    CORS(app)  # remove in production

    @app.route("/")
    def empty():
        return {"response": "this is an empty route"}

    @app.route("/chat", methods=["GET"])
    def chat():
        ...
        # code to communicate with ELM
        ...

    return app

This creates the Flask app and defines the routes that you want your web app to respond to. In this case, two routes are defined: /, which simply returns a message stating that this is an empty route, and /chat, where communication with ELM happens.

In this simple example, the /chat route accepts two query parameters, user_id and prompt. The chat() method will get those query parameters from the URL sent to it, check that they are not empty or missing, send the prompt to ELM and then return ELM's response.

 

Two query parameters

@app.route("/chat", methods=["GET"])
def chat():
    try:
        user_id = request.args.get('user_id', default='none')
        prompt = request.args.get('prompt', default='')

        if user_id == 'none':
            return {
                "response": "ERROR",
                "message": "missing user_id"
            }, 400
        if prompt == '':
            return {
                "response": "ERROR",
                "message": "you did not ask anything"
            }, 400

        response = AskELM(prompt)
        return {
            "sender": "chatbot",
            "text": str(response)
        }
    except Exception as e:
        return {
            "response": "ERROR",
            "message": str(e)
        }, 500
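Before wiring in ELM, the query-parameter handling above can be checked without any network access. The sketch below stubs out AskELM (so no API key is needed) and exercises the route with Flask's built-in test client:

```python
from flask import Flask, request

def AskELM(input_text):
    # stand-in for the real ELM call, so the route can be tested offline
    return "stubbed response"

app = Flask(__name__)

@app.route("/chat", methods=["GET"])
def chat():
    user_id = request.args.get('user_id', default='none')
    prompt = request.args.get('prompt', default='')
    if user_id == 'none':
        return {"response": "ERROR", "message": "missing user_id"}, 400
    if prompt == '':
        return {"response": "ERROR", "message": "you did not ask anything"}, 400
    return {"sender": "chatbot", "text": str(AskELM(prompt))}

tc = app.test_client()
print(tc.get("/chat").status_code)                         # 400 (missing user_id)
print(tc.get("/chat?user_id=abc123&prompt=hi").get_json())
```

This checks the 400 responses and the success path without starting a server.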

Communication with ELM is performed in the AskELM() method:

def AskELM(input_text):
    completion = client.chat.completions.create(
        messages=[
            {
                "role": "system",
                "content": "You are a teaching assistant that only responds with reflective questioning."
            },
            {
                "role": "user",
                "content": str(input_text),
            }
        ],
        model="gpt-4-turbo",
    )
    return completion.choices[0].message.content

Here, the system role defines the type of response that ELM should provide. A simple example system role is used in this case; in production it would be further developed and tuned to suit your use case.

 

Running your app locally

Flask includes a development server that enables you to test your app locally, but it should not be used in production. After installing all the requirements, make sure you are in the root directory of your app and, from the command line, type:

Flask's development server

flask --app chatbox-flask run

 

The --app flag provides the name of the Python script that contains your app (here called chatbox-flask.py). If the server starts successfully, the output will include a local address at which you can communicate with your app. In this example it is http://127.0.0.1:5000 (replace with the output from your app if different).

Enter this into your browser's address bar along with the appropriate route for accessing your chatbot and the necessary URL query parameters, e.g.:

http://127.0.0.1:5000/chat?user_id=abc123&prompt=how+do+i+solve+the+equation+2x+%3D+1

In this example, the user_id is abc123 and the prompt is how do i solve the equation 2x = 1, appropriately URL-encoded.
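Rather than encoding the query string by hand, it can be built with urllib.parse from the Python standard library, for example:

```python
from urllib.parse import urlencode

# query parameters for the /chat route
params = {
    "user_id": "abc123",
    "prompt": "how do i solve the equation 2x = 1",
}

# urlencode handles the percent-encoding (spaces become +, = becomes %3D)
url = "http://127.0.0.1:5000/chat?" + urlencode(params)
print(url)
# http://127.0.0.1:5000/chat?user_id=abc123&prompt=how+do+i+solve+the+equation+2x+%3D+1
```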

After submitting this in the address bar, you should receive a response from ELM via your locally hosted app. This response will be a JSON string of the form {"sender": "chatbot", "text": "<ELM's response>"}, and should look something like:

flask query and response example

Hosting online

This example describes how to run your app locally using the included development server in Flask. This development server should not be used in production. Instead, a production WSGI server should be used, such as gunicorn (https://gunicorn.org/). Flask has details on how to deploy in production (https://flask.palletsprojects.com/en/3.0.x/deploying/) including the use of an HTTP server (such as nginx) as a reverse proxy.
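As a minimal sketch of such a deployment, gunicorn can serve the app factory directly. This assumes the script has an import-safe filename (here hypothetically renamed chatbot_flask.py, since module names containing hyphens cannot be reliably imported):

```shell
pip install gunicorn

# serve the Flask app factory with 4 worker processes on port 8000
gunicorn -w 4 -b 0.0.0.0:8000 "chatbot_flask:create_app()"
```

In a full deployment this would normally sit behind a reverse proxy such as nginx, as described in the Flask deployment documentation linked above.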

In production, importing CORS from flask_cors should be removed as well as the CORS(app) line of code.

 

Use case

In the School of Engineering's Remote Laboratory research group (https://www.eng.ed.ac.uk/research/facilities/remote-labs), we are developing digital tools to support learning with remote laboratories. This includes development of a Virtual Teaching Assistant to provide live support to students during remote lab activities. By using Python and the ELM API key to interact with OpenAI LLM models, we are able to embed ELM interaction in our user interfaces as well as collect and analyse data for research purposes.

 

Full code

Below is the full Python code for running this example:

import os

from dotenv import load_dotenv
from flask import Flask, request
from flask_cors import CORS  # remove in production
from openai import OpenAI

load_dotenv()  # reads OPENAI_API_KEY and OPENAI_BASE_URL from the .env file

client = OpenAI(
    # OPENAI_BASE_URL is picked up automatically by the OpenAI client
    api_key=os.environ.get("OPENAI_API_KEY"),
)

def AskELM(input_text):
    completion = client.chat.completions.create(
        messages=[
            {
                "role": "system",
                "content": "You are a teaching assistant that only responds with reflective questioning."
            },
            {
                "role": "user",
                "content": str(input_text),
            }
        ],
        model="gpt-4-turbo",
    )
    return completion.choices[0].message.content

def create_app():
    app = Flask(__name__)
    CORS(app)  # remove in production

    @app.route("/")
    def empty():
        return {"response": "this is an empty route"}

    @app.route("/chat", methods=["GET"])
    def chat():
        try:
            user_id = request.args.get('user_id', default='none')
            prompt = request.args.get('prompt', default='')

            if user_id == 'none':
                return {
                    "response": "ERROR",
                    "message": "missing user_id"
                }, 400
            if prompt == '':
                return {
                    "response": "ERROR",
                    "message": "you did not ask anything"
                }, 400

            response = AskELM(prompt)
            return {
                "sender": "chatbot",
                "text": str(response)
            }
        except Exception as e:
            return {
                "response": "ERROR",
                "message": str(e)
            }, 500

    return app