Interacting with ELM using Python and the ELM API key

Objective: write a simple Python script that submits user prompts to ELM and sets the system response style.

 

Requirements

This example requires Python 3.7+, as well as the OpenAI Python API library (https://pypi.org/project/openai/) and python-dotenv (https://pypi.org/project/python-dotenv/). Both can be installed using pip:

pip installation

pip install openai==1.26.0   # this is the version tested with the scripts below
pip install python-dotenv
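If you want to confirm which version of the library was installed, one quick (optional) check from the command line is:

python -c "import openai; print(openai.__version__)"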

You will also need access to a private ELM API key.

 

Using the private ELM API key

This example requires access to an ELM API key, and this key must be kept private. One method of doing this during development is to load the key from a local environment variable, using a .env file that is saved in your project root but added to .gitignore so it is never committed.
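For example, a one-line .gitignore entry is enough to keep the file out of version control:

# .gitignore
.env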

The .env file should contain:

OPENAI_API_KEY=<ENTER-YOUR-API-KEY-HERE>
OPENAI_BASE_URL=<ENTER-THE-ELM-BASE-URL-HERE>

 

The OPENAI_BASE_URL entry gives the path that the openai Python library will use to point requests to ELM rather than the default OpenAI endpoint.

Your Python script can now access these environment variables, without exposing their values in the script itself, by using dotenv to load them into your script:

Using dotenv

from dotenv import load_dotenv
from openai import OpenAI
import os

load_dotenv()  # take environment variables from .env

client = OpenAI(
      # the key is read from the environment, so it never appears in the script
      api_key=os.environ.get("OPENAI_API_KEY"),
)
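As an optional sanity check (not part of the original example), you can confirm that the variables loaded without printing the key itself:

print("API key loaded:", bool(os.environ.get("OPENAI_API_KEY")))
print("Base URL:", os.environ.get("OPENAI_BASE_URL"))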

 

Sending and receiving from ELM

A simple method for sending user input to ELM and printing a response is given below:

Sending user input to ELM and printing a response:

def AskELM(input_text):
      stream = client.chat.completions.create(
            messages=[
                  {
                        "role": "user",
                        "content": str(input_text),
                  }
            ],
            model="gpt-4-turbo",
            stream=True,
      )
      print('ELM Response:')
      for chunk in stream:
            # each chunk carries an incremental piece of the streamed response
            print(chunk.choices[0].delta.content or "", end="")
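For example, a single call might look like this (the prompt text is just illustrative):

AskELM("Solve 2x + 4 = 10 for x")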

We can configure the type of response that ELM will provide by adding details to the system role. Below is an example of a "teaching assistant" that will always provide the "correct" answer:

"Teaching assistant" example:

stream = client.chat.completions.create(
      messages=[
            {
                  "role": "system",
                  "content": "You are a teaching assistant that always provides the correct answer as succinctly as possible.",
            },
            {
                  "role": "user",
                  "content": str(input_text),
            }
      ],
      model="gpt-4-turbo",
      stream=True,
)

When prompted for the solution to a simple equation, this assistant provides the answer directly:

 

[Image: closed question, closed answer]

However, by updating this system role we can get ELM to respond in different ways:

 

Different response:

stream = client.chat.completions.create(
      messages=[
            {
                  "role": "system",
                  "content": "You are a teaching assistant that only responds with reflective questioning.",
            },
            {
                  "role": "user",
                  "content": str(input_text),
            }
      ],
      model="gpt-4-turbo",
      stream=True,
)

This system role provides the following response to the same prompt:

 

[Image: closed question, reflective response]
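If you switch between system roles often, one possible refactor (a sketch, not part of the original example) is to pass the system prompt into AskELM as an argument:

def AskELM(input_text, system_prompt):
      stream = client.chat.completions.create(
            messages=[
                  {"role": "system", "content": system_prompt},
                  {"role": "user", "content": str(input_text)},
            ],
            model="gpt-4-turbo",
            stream=True,
      )
      print('ELM Response:')
      for chunk in stream:
            print(chunk.choices[0].delta.content or "", end="")

# swap response styles without editing the function body
AskELM("Solve 2x + 4 = 10 for x",
       "You are a teaching assistant that only responds with reflective questioning.")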

 

Use case

In the School of Engineering's Remote Laboratory research group (https://www.eng.ed.ac.uk/research/facilities/remote-labs), we are developing digital tools to support learning with remote laboratories. This includes development of a Virtual Teaching Assistant to provide live support to students during remote lab activities. By using Python and the ELM API key to interact with OpenAI LLM models, we are able to embed ELM interaction in our user interfaces as well as collect and analyse data for research purposes.

 

Full code

Below is the full Python code for running a simple chatbot where you can configure ELM to respond in a specific way by updating the system role. After running the script, you can enter a prompt for ELM and its response will print to the terminal. You can continue to ask questions and type 'quit' to exit the script.

Full code:

from dotenv import load_dotenv
from openai import OpenAI
import os

load_dotenv()  # take environment variables from .env

client = OpenAI(
      # This is the default and can be omitted
      api_key=os.environ.get("OPENAI_API_KEY"),
)

def AskELM(input_text):
      stream = client.chat.completions.create(
            messages=[
                  {
                        "role": "system",
                        "content": "You are a teaching assistant that always provides the correct answer as succinctly as possible.",
                  },
                  {
                        "role": "user",
                        "content": str(input_text),
                  }
            ],
            model="gpt-4-turbo",
            stream=True,
      )
      print('ELM Response:')
      for chunk in stream:
            print(chunk.choices[0].delta.content or "", end="")

def main():
      user_input = ''
      while user_input != 'quit':
            print('\n')
            user_input = input('User: ')
            if user_input != 'quit':
                  AskELM(user_input)

if __name__ == "__main__":
      main()
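Assuming the script is saved as, say, elm_chat.py (the filename is just illustrative), run it from the project root so that the .env file is found:

python elm_chat.py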