Pixiv ID: 104755143

In this article we move into hands-on practice, starting with setting up a basic Python environment and then connecting to a large language model, using OpenAI’s ChatGPT as the example.

Python Environment and Dependencies

To run your cyber companion, your code will rely on third-party libraries. This article uses Python’s standard library plus packages such as langchain. Please note that if the corresponding dependencies are not correctly installed in your Python environment, the code will not run properly.

Note: The following commands may not apply to your system; look up the conda commands for your specific platform if needed.

Install Python or the Anaconda environment manager

Please go to the Python official website or Anaconda official website to install the software; you only need to choose one. This article recommends using Anaconda to avoid conflicts between multiple environments.

Please note: when installing either one, be sure to add it to your system variables or system PATH (experts excepted); otherwise you will not be able to run scripts normally from CMD. Specific installation steps and troubleshooting are beyond the scope of this tutorial.

Generally, the default installation is sufficient. After installing Anaconda, open your CMD command line window and run the following command to check if conda is installed successfully:

conda -V 
# or
conda --version

If it shows “command not found,” the installation did not succeed, and you will need to troubleshoot it yourself.

# Update conda
conda update conda
# Update all packages
conda update --all

Create a Python environment

Note: This tutorial recommends Python 3.11 or above; earlier versions have not been tested, so please use a recent release.

We’ll create a dedicated virtual environment for our cyber companion project and, from now on, run all companion-related code inside it. This will also make it easier later to compile a dependency list when packaging Docker images.

Use this command to create a new environment (specifying the Python version ensures the environment gets its own interpreter instead of falling back to the base one):

conda create -n env_name python=3.11

For example:

conda create -n ai python=3.11

Activate the environment and install dependencies

Activate the environment

Use the following command to activate a specific environment:

conda activate env_name

For example, we just created an environment named ai, and now we want to activate it:

conda activate ai

After activation, all our operations will use the packages and dependencies installed in this environment. You will see the environment name at the beginning of your command-line prompt; every time you open the command line, check that it is the environment you need. You can also configure a default environment.

You can use this command to see which virtual environments exist:

conda env list
# or
conda info --envs

You might need this command to deactivate the current environment (but don’t do it yet, as we will continue using this environment):

# Deactivates whatever environment is currently active (e.g. 'ai')
# and returns you to the default (base) environment
conda deactivate

Now activate the environment we just created, and still in the CMD, enter the following commands in sequence to install the related dependencies:

# OpenAI's official Python library
pip install openai
# The Python library for langchain
pip install langchain

After completing the installation, we’re ready to move on to the next step. (Note: langchain’s import paths have changed significantly between releases; this article’s code uses the langchain.chat_models / langchain.schema interface, so if a newer version complains about imports, check the official documentation below.)

The official documentation is highly recommended; it’s the most accurate and worth studying repeatedly.

OpenAI official documentation

Langchain official documentation

Create a Python script

Prepare your OpenAI API

Prepare your usable API token and keep it safe.

Start coding

Some people prefer scripts, others prefer interactive Python notebooks; it varies from person to person, so use whichever you like. I personally prefer scripts; if you prefer notebooks, please learn how to use them on your own.

It is recommended to install and use Visual Studio Code; experts may choose as they please. In VS Code, create a file named cyberai.py, save it somewhere you can find it, and then we start entering code:

First, import the necessary parts of the library we previously installed:

  • The first import is used to initialize model settings and connect to large language models; langchain provides interfaces for most models, including locally deployed LLMs. For more details, please see the official documentation.
  • The second import provides message classes that tell the model which parts of a request are user input, prompt words, and conversation history (see the sketch after the imports). This will be covered in more detail later.
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
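
As a quick illustration (not needed for this section’s script), langchain.schema also provides SystemMessage and AIMessage alongside HumanMessage; a hypothetical conversation marking the three roles might look like this:

from langchain.schema import SystemMessage, HumanMessage, AIMessage

# Hypothetical example: each message class marks a different role
conversation = [
    SystemMessage(content="You are a friendly cyber companion."),  # prompt words / persona
    HumanMessage(content="Hello!"),                                # user input
    AIMessage(content="Hi! I missed you. How was your day?"),      # earlier model reply (history)
]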

Then we begin to connect to the large language model:

  1. model_name is the model you choose; here we use gpt-3.5-turbo.
  2. openai_api_base is your access point; if you use a third-party intermediary API or a reverse proxy, you may need this. Fill in the base URL between the double quotes, such as https://api.openai.com/v1 (note this is the base address, not the full /chat/completions endpoint). If you can connect directly to the official endpoint, you can delete this line.
  3. The last one, openai_api_key, is your API token. Fill in your token inside the double quotes.
chat_model = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    openai_api_base="",  # optional: your access point / proxy base URL
    openai_api_key="",   # your API token
)
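
Hardcoding a token in the script is risky if you ever share the file. A minimal alternative sketch, assuming you have set an OPENAI_API_KEY environment variable on your machine:

import os

from langchain.chat_models import ChatOpenAI

# Read the token from the environment instead of hardcoding it
chat_model = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    openai_api_key=os.environ["OPENAI_API_KEY"],
)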

Great, we are halfway through. Let’s define a simple function and call the model we just set up within this function:

def get_response_from_llm(messages):
    # Send the message list to the chat model and return its reply (an AIMessage)
    return chat_model(messages)

This function sends a list of messages to chat_model and returns the AI-generated reply.
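
For example, a single call might look like this (assuming chat_model is configured as above; the reply object exposes its text via .content):

reply = get_response_from_llm([HumanMessage(content="Hello")])
print(reply.content)  # the model's text answer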

We’re nearly there. Let’s add a simple command-line interaction and optimize the AI’s response format a bit:

if __name__ == "__main__":
    while True:
        user_input = input("\nEnter a question or type 'exit' to leave:")
        if user_input.lower() == 'exit':
            print("Goodbye")
            break
        messages = [HumanMessage(content=user_input)]
        response = get_response_from_llm(messages).content
        print(response)
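
If the token or network is misconfigured, the call will raise an exception and crash the loop. As an optional sketch (not part of the complete code below), you could wrap the call like this:

if __name__ == "__main__":
    while True:
        user_input = input("\nEnter a question or type 'exit' to leave:")
        if user_input.lower() == 'exit':
            print("Goodbye")
            break
        messages = [HumanMessage(content=user_input)]
        try:
            response = get_response_from_llm(messages).content
        except Exception as exc:  # e.g. invalid token or network failure
            print(f"Request failed: {exc}")
            continue
        print(response)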

Your complete code might look like this in the end:

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat_model = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    openai_api_base="",  # optional: your access point / proxy base URL
    openai_api_key="",   # your API token
)

def get_response_from_llm(messages):
    # Send the message list to the chat model and return its reply (an AIMessage)
    return chat_model(messages)

if __name__ == "__main__":
    while True:
        user_input = input("\nEnter a question or type 'exit' to leave:")
        if user_input.lower() == 'exit':
            print("Goodbye")
            break
        messages = [HumanMessage(content=user_input)]
        response = get_response_from_llm(messages).content
        print(response)

Well done. Save the file, close VS Code, and open a command line window; we’re going to run the script and try a first conversation with the AI:

In your command line window, type python3 followed by a space, but don’t press Enter yet:

# Mind the trailing space!!!!!
python3

Then in the file explorer, try dragging the cyberai.py file you just wrote and saved into your command line window. This step mainly tells the command line the path + file name + extension of the file so it can run the script. The complete command should look something like this:

python3 /Volumes/path/path/cyberai.py

Alright, press Enter, and if everything goes well, you should see a prompt. (This script has been tested by the author and runs correctly; please troubleshoot any issues on your own, thanks for understanding.)

Enter a question or type 'exit' to leave:

Congratulations if you’ve gotten this far; you’ve succeeded. Try typing a question and wait for the AI to reply, for example:

Enter a question or type 'exit' to leave: Can you be my cyber companion?

I'm sorry, as an AI language model, I have no physical form or emotions and cannot be your cyber companion. I can only provide linguistic communication and assistance. Please be respectful and refrain from inappropriate comments.

(This AI seems to be overheating… we need to rescue it quickly…)

End of this section.