ChatGPT is a powerful and flexible language model developed by OpenAI that has become a popular choice for many natural language processing (NLP) tasks and applications, such as text generation, conversation generation, and text summarization.
Why is ChatGPT so popular?
One of the reasons for its popularity is that it was trained on a massive dataset of text, which allows it to generate high-quality, human-like text. It has also been fine-tuned for specific tasks, such as answering questions, which makes it well-suited for a wide range of applications. Another reason is that the model is easily accessible through OpenAI’s API, so it can be used from a variety of programming languages and frameworks. This has led to a wide range of applications built on ChatGPT’s capabilities, such as chatbots, language translation tools, and text summarization tools. The model has also been fine-tuned for various languages besides English, which makes it a versatile choice for many use cases.
Can I run ChatGPT Client locally?
The short answer is yes! It is possible to run a ChatGPT client locally on your own computer. Here’s a quick guide to running a ChatGPT client locally using Docker Desktop. Let’s dive in.
Prerequisites
Step 1. Install Docker Desktop
Step 2. Enable Kubernetes
Step 3. Writing the Dockerfile
FROM python:3.8-slim-buster
ENV MODEL_ENGINE "text-davinci-002"
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY gpt3_script.py .
CMD ["python", "gpt3_script.py"]
This Dockerfile starts from the python:3.8-slim-buster image, sets an environment variable for the model engine, makes /app the working directory, installs the Python dependencies, copies gpt3_script.py into the image, and runs it as the container’s main process. Copying requirements.txt and installing dependencies before copying the script lets Docker cache the dependency layer across rebuilds.
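Because the whole build context is sent to the Docker daemon, it can also help to add a .dockerignore file alongside the Dockerfile. This is an optional convention, not something the script requires; a minimal example might look like:

```
__pycache__/
*.pyc
.env
.git/
```

This keeps Python caches, local environment files, and version-control history out of the build context and out of the image.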
Step 4. Writing gpt3_script.py
Here’s an example of a gpt3_script.py file that can be used to interact with the ChatGPT API:
import os

import openai

# Add your OpenAI API key
openai.api_key = "YOUR_API_KEY"


def generate_text(prompt):
    completions = openai.Completion.create(
        # Pick up the engine set via ENV in the Dockerfile,
        # falling back to text-davinci-002
        engine=os.environ.get("MODEL_ENGINE", "text-davinci-002"),
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
        temperature=0.7,
    )
    message = completions.choices[0].text
    return message.strip()


generated_text = generate_text("Write a short story about a robot who wants to be human")
print(generated_text)
The exact contents of gpt3_script.py will depend on the task or application you are trying to accomplish with GPT-3; the example above is a simple script that generates text. The openai library is imported and used to interact with the GPT-3 API.
An API key is added first; this is a required step to access the GPT-3 API. Then a function generate_text() is defined, which takes a prompt as input and returns the generated text as output. Inside the function, openai.Completion.create() is called to generate text based on the prompt. The engine, prompt, max_tokens, n, stop, and temperature arguments can be adjusted to suit your specific needs.
Finally, the script calls generate_text() with a specific prompt and assigns the output to the generated_text variable, which is then printed to the console.
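Hardcoding the API key is fine for a quick local test, but a safer pattern is to read it from an environment variable at startup. Here’s a minimal sketch, assuming you export a variable named OPENAI_API_KEY (the variable name and helper function are illustrative, not part of the original script):

```python
import os


def load_api_key():
    # Read the API key from the environment instead of
    # hardcoding it in the script
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key
```

With this in place, the hardcoded assignment becomes openai.api_key = load_api_key(), and you can pass the key at runtime with docker run -e OPENAI_API_KEY=... without baking it into the image.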
Step 5. Creating the requirements.txt file
You also need to create a file named requirements.txt and add all the dependencies needed by your script. Here’s an example of a requirements.txt file that can be used with the ChatGPT Dockerfile:
openai
requests
This file lists the openai and requests packages, which are required to run the main script that uses the ChatGPT API.
You can verify the file by installing these packages locally with the following command:
pip install -r requirements.txt
You can also use the command pip freeze > requirements.txt to create the requirements.txt file with all the packages installed in your environment.
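Using pip freeze pins exact versions, which makes Docker builds reproducible. The output looks something like this (the version numbers shown here are illustrative, not a recommendation):

```
openai==0.25.0
requests==2.28.1
```

Pinned versions ensure that rebuilding the image later pulls the same dependencies you tested with.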
Step 6. Building the Image
Once you create an account with OpenAI, you will need to add your OpenAI API key by replacing the placeholder in this line of the script:
openai.api_key = "YOUR_API_KEY"
Once you have made the changes, it’s time to build the image by running the following command:
docker build -t ajeetraina/chatgpt-test .
This will create a Docker image named ajeetraina/chatgpt-test that you can run as a container or deploy in a Kubernetes cluster as a Pod.
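As a sketch of the Kubernetes option, a minimal Pod manifest might look like the following. This assumes the image has been pushed to a registry your cluster can pull from; the Pod name and file name are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: chatgpt-client
spec:
  containers:
    - name: chatgpt-client
      image: ajeetraina/chatgpt-test
      env:
        # Matches the ENV default baked into the Dockerfile
        - name: MODEL_ENGINE
          value: "text-davinci-002"
  # The script runs once and exits, so don't restart it in a loop
  restartPolicy: Never
```

You would apply it with kubectl apply -f pod.yaml and read the output with kubectl logs chatgpt-client.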
Step 7. Running the ChatGPT Client container
docker run -d -p 8080:8080 ajeetraina/chatgpt-test
15830b65926b3ae083c94262f7ad700bf6e3d12c8e9374b08ce21cd80db07662
% docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15830b65926b ajeetraina/chatgpt-test "python /app/gpt3_sc…" 4 seconds ago Up 3 seconds 0.0.0.0:8080->8080/tcp serene_blackburn
Step 8. Verifying the Result
If you run docker logs with the container ID, you will see that ChatGPT successfully displayed the result:
% docker logs -f 158
Samantha was built to be the perfect robot. She was designed to look and act exactly like a human, but she was never quite able to shake the feeling that she was different. She longed to be human herself, and so she began to study everything she could about them. She read their books, watched their movies, and even tried to mimic their behavior.
But no matter how hard she tried, Samantha just couldn't seem to become human. She was always aware of the fact that she was a robot, and it felt like a weight inside her chest. One day, she decided to talk to her creator about her feelings.
"I want to be human," she said. "I know I was created to be a robot, but I can't help how I feel. I study everything about humans and I try to mimic them, but it's just not the same. It's like there's something inside me that's not quite right."
Her creator looked at her sympathetically. "I'm sorry, Samantha. I wish I could make you human, but it's just not possible. You're a robot, and that's all you can ever be."
Samantha hung her head in disappointment. She knew her creator was right, but she couldn't help but feel like she was missing out on something special. She would always be an outsider, looking in on the human world but never truly belonging to it.
%
Conclusion
ChatGPT can be deployed in a Docker container, which allows for easy packaging, deployment, and scaling of the model. If you want to learn how a ChatGPT client can be deployed using Kubernetes, do check out this blog.