Compare commits


19 Commits

Author             SHA1        Message                                                        Date
0xacx              192434b5a6  Update README.md                                               1 year ago
0xacx              b7833e6e25  Merge pull request #101 from 0xacx/multiline-prompt-chat-mode  1 year ago
Achilleas          80693a4d86  Remove sed due to warning, replace with escape function, rename variables, autoformat  1 year ago
0xacx              24a0de13d3  Update README.md                                               1 year ago
Achilleas          f5f7f1bb84  Merge branch 'main' into multiline-prompt-chat-mode            1 year ago
0xacx              5e572eb140  Merge pull request #100 from np/safe-escaping                  1 year ago
Achilleas          3236de2f23  Rename to multiline prompt                                     1 year ago
Achilleas          0a1ca89e6c  formatting                                                     1 year ago
Achilleas          3ba434ee87  Keep old OPENAI key name to maintain compatibility with the install script, add identation in elif  1 year ago
0xacx              7b09feaf94  Merge pull request #90 from camAtGitHub/main                   1 year ago
Nicolas Pouillard  cbc31b57cf  Safer quoting on $COLUMNS                                      1 year ago
Nicolas Pouillard  3b0cd946ce  Safe escaping using jq                                         1 year ago
Nicolas Pouillard  4f1f92d022  Refactoring to reduce the use global variables from functions  1 year ago
Nicolas Pouillard  60eb98d7b3  Safer quoting                                                  1 year ago
camAtGitHub        1cf6d04b45  rename(OPENAI_KEY): OPENAI_API_KEY is standardised across projects  1 year ago
camAtGitHub        24fc6bf52b  feature(big-prompt): allow multi-line input during chat mode   1 year ago
camAtGitHub        798e240b56  optimize(exit): streamline exit conditions                     1 year ago
camAtGitHub        f040fe8177  add(list_models): Models can be queried via cli argument       1 year ago
camAtGitHub        26081ad319  refactor(usage()) Align help output                            1 year ago
Files changed:

  README.md   (2 changes)
  chatgpt.sh  (210 changes)

README.md

@@ -163,7 +163,7 @@ This script relies on curl for the requests to the api and jq to parse the json
 ## Contributors
 :pray: Thanks to all the people who used, tested, submitted issues, PRs and proposed changes:
-[pfr-dev](https://www.github.com/pfr-dev), [jordantrizz](https://www.github.com/jordantrizz), [se7en-x230](https://www.github.com/se7en-x230), [mountaineerbr](https://www.github.com/mountaineerbr), [oligeo](https://www.github.com/oligeo), [biaocy](https://www.github.com/biaocy), [dmd](https://www.github.com/dmd), [goosegit11](https://www.github.com/goosegit11), [dilatedpupils](https://www.github.com/dilatedpupils), [direster](https://www.github.com/direster), [rxaviers](https://www.github.com/rxaviers), [Zeioth](https://www.github.com/Zeioth), [edshamis](https://www.github.com/edshamis), [nre-ableton](https://www.github.com/nre-ableton), [TobiasLaving](https://www.github.com/TobiasLaving), [RexAckermann](https://www.github.com/RexAckermann), [emirkmo](https://www.github.com/emirkmo)
+[pfr-dev](https://www.github.com/pfr-dev), [jordantrizz](https://www.github.com/jordantrizz), [se7en-x230](https://www.github.com/se7en-x230), [mountaineerbr](https://www.github.com/mountaineerbr), [oligeo](https://www.github.com/oligeo), [biaocy](https://www.github.com/biaocy), [dmd](https://www.github.com/dmd), [goosegit11](https://www.github.com/goosegit11), [dilatedpupils](https://www.github.com/dilatedpupils), [direster](https://www.github.com/direster), [rxaviers](https://www.github.com/rxaviers), [Zeioth](https://www.github.com/Zeioth), [edshamis](https://www.github.com/edshamis), [nre-ableton](https://www.github.com/nre-ableton), [TobiasLaving](https://www.github.com/TobiasLaving), [RexAckermann](https://www.github.com/RexAckermann), [emirkmo](https://www.github.com/emirkmo), [np](https://www.github.com/np), [camAtGitHub](https://github.com/camAtGitHub)

 ## Contributing
 Contributions are very welcome!

chatgpt.sh

@@ -12,7 +12,6 @@ CHATGPT_CYAN_LABEL="\033[36mchatgpt \033[0m"
 PROCESSING_LABEL="\n\033[90mProcessing... \033[0m\033[0K\r"
 OVERWRITE_PROCESSING_LINE=" \033[0K\r"
 if [[ -z "$OPENAI_KEY" ]]; then
 	echo "You need to set your OPENAI_KEY to use this script"
 	echo "You can set it temporarily by running this on your terminal: export OPENAI_KEY=YOUR_KEY_HERE"
@@ -37,15 +36,34 @@ Commands:
 	*If a command modifies your file system or dowloads external files the script will show a warning before executing.

 Options:
-	-i, --init-prompt - Provide initial chat prompt to use in context
-	--init-prompt-from-file - Provide initial prompt from file
-	-p, --prompt - Provide prompt instead of starting chat
-	--prompt-from-file - Provide prompt from file
-	-t, --temperature - Temperature
-	--max-tokens - Max number of tokens
-	-m, --model - Model
-	-s, --size - Image size. (The sizes that are accepted by the OpenAI API are 256x256, 512x512, 1024x1024)
-	-c, --chat-context - For models that do not support chat context by default (all models except gpt-3.5-turbo and gpt-4), you can enable chat context, for the model to remember your previous questions and its previous answers. It also makes models aware of todays date and what data it was trained on.
+	-i, --init-prompt          Provide initial chat prompt to use in context
+	--init-prompt-from-file    Provide initial prompt from file
+	-p, --prompt               Provide prompt instead of starting chat
+	--prompt-from-file         Provide prompt from file
+	-b, --big-prompt           Allow multi-line prompts during chat mode
+	-t, --temperature          Temperature
+	--max-tokens               Max number of tokens
+	-l, --list                 List available openAI models
+	-m, --model                Model to use
+	-s, --size                 Image size. (The sizes that are accepted by the
+	                           OpenAI API are 256x256, 512x512, 1024x1024)
+	-c, --chat-context         For models that do not support chat context by
+	                           default (all models except gpt-3.5-turbo and
+	                           gpt-4), you can enable chat context, for the
+	                           model to remember your previous questions and
+	                           its previous answers. It also makes models
+	                           aware of todays date and what data it was trained
+	                           on.
 EOF
 }
@@ -54,33 +72,44 @@ EOF
 # $1 should be the response body
 handle_error() {
 	if echo "$1" | jq -e '.error' >/dev/null; then
-		echo -e "Your request to Open AI API failed: \033[0;31m$(echo $1 | jq -r '.error.type')\033[0m"
-		echo $1 | jq -r '.error.message'
+		echo -e "Your request to Open AI API failed: \033[0;31m$(echo "$1" | jq -r '.error.type')\033[0m"
+		echo "$1" | jq -r '.error.message'
 		exit 1
 	fi
 }
+
+# request to openAI API models endpoint. Returns a list of models
+# takes no input parameters
+list_models() {
+	models_response=$(curl https://api.openai.com/v1/models \
+		-sS \
+		-H "Authorization: Bearer $OPENAI_KEY")
+	handle_error "$models_response"
+	models_data=$(echo $models_response | jq -r -C '.data[] | {id, owned_by, created}')
+	echo -e "$OVERWRITE_PROCESSING_LINE"
+	echo -e "${CHATGPT_CYAN_LABEL}This is a list of models currently available at OpenAI API:\n ${models_data}"
+}
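The new `list_models` helper leans entirely on the jq filter `.data[] | {id, owned_by, created}`. A minimal offline sketch of that filter, using a made-up one-model response (the sample JSON is illustrative, not real API output):

```shell
# Made-up single-model response, shaped like the /v1/models payload that
# list_models receives (illustrative only, not real API output)
models_response='{"data":[{"id":"gpt-3.5-turbo","object":"model","owned_by":"openai","created":1677610602}]}'

# the same jq filter list_models applies: keep only id, owned_by and created
models_data=$(echo "$models_response" | jq -r '.data[] | {id, owned_by, created}')
echo "$models_data"
```

The filter drops fields like `object` and pretty-prints one small object per model, which is what the script then displays.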
 # request to OpenAI API completions endpoint function
 # $1 should be the request prompt
 request_to_completions() {
-	request_prompt="$1"
-	response=$(curl https://api.openai.com/v1/completions \
+	local prompt="$1"
+
+	curl https://api.openai.com/v1/completions \
 		-sS \
 		-H 'Content-Type: application/json' \
 		-H "Authorization: Bearer $OPENAI_KEY" \
 		-d '{
 			"model": "'"$MODEL"'",
-			"prompt": "'"${request_prompt}"'",
+			"prompt": "'"$prompt"'",
 			"max_tokens": '$MAX_TOKENS',
 			"temperature": '$TEMPERATURE'
-		}')
+		}'
 }
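The request body above is assembled by splicing shell variables straight into a JSON literal, which is why prompts must be pre-escaped before they reach this function. For comparison only, `jq -n --arg` can build the same body and do the JSON escaping itself; this is a sketch of that alternative with illustrative values, not what the script does:

```shell
# Sketch: building the same /v1/completions body with jq --arg, which
# JSON-escapes the prompt automatically (values below are illustrative)
MODEL="text-davinci-003"
MAX_TOKENS=1024
TEMPERATURE=0.7
body=$(jq -n \
	--arg model "$MODEL" \
	--arg prompt 'a "quoted" prompt
spanning two lines' \
	--argjson max_tokens "$MAX_TOKENS" \
	--argjson temperature "$TEMPERATURE" \
	'{model: $model, prompt: $prompt, max_tokens: $max_tokens, temperature: $temperature}')
echo "$body"
```

Quotes and newlines in the prompt come out correctly escaped in `body` with no manual escaping step.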
 # request to OpenAI API image generations endpoint function
 # $1 should be the prompt
 request_to_image() {
-	prompt="$1"
+	local prompt="$1"
 	image_response=$(curl https://api.openai.com/v1/images/generations \
 		-sS \
 		-H 'Content-Type: application/json' \
@@ -95,8 +124,8 @@ request_to_image() {
 # request to OpenAPI API chat completion endpoint function
 # $1 should be the message(s) formatted with role and content
 request_to_chat() {
-	message="$1"
-	response=$(curl https://api.openai.com/v1/chat/completions \
+	local message="$1"
+	curl https://api.openai.com/v1/chat/completions \
 		-sS \
 		-H 'Content-Type: application/json' \
 		-H "Authorization: Bearer $OPENAI_KEY" \
@ -108,35 +137,36 @@ request_to_chat() {
], ],
"max_tokens": '$MAX_TOKENS', "max_tokens": '$MAX_TOKENS',
"temperature": '$TEMPERATURE' "temperature": '$TEMPERATURE'
}') }'
} }
# build chat context before each request for /completions (all models except # build chat context before each request for /completions (all models except
# gpt turbo and gpt 4) # gpt turbo and gpt 4)
# $1 should be the chat context # $1 should be the escaped request prompt,
# $2 should be the escaped prompt # it extends $chat_context
build_chat_context() { build_chat_context() {
chat_context="$1" local escaped_request_prompt="$1"
escaped_prompt="$2"
if [ -z "$chat_context" ]; then if [ -z "$chat_context" ]; then
chat_context="$CHAT_INIT_PROMPT\nQ: $escaped_prompt" chat_context="$CHAT_INIT_PROMPT\nQ: $escaped_request_prompt"
else else
chat_context="$chat_context\nQ: $escaped_prompt" chat_context="$chat_context\nQ: $escaped_request_prompt"
fi fi
request_prompt="${chat_context//$'\n'/\\n}" }
escape() {
echo "$1" | jq -Rrs 'tojson[1:-1]'
} }
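The `escape` helper introduced here is the heart of the PR: `jq -Rrs` reads raw stdin slurped into one string, `tojson` JSON-encodes it, and the `[1:-1]` slice strips the surrounding quotes so the result can be spliced into hand-built JSON. A small demonstration (note that the `echo` inside `escape` appends a trailing newline, which survives as a literal `\n`):

```shell
# escape(): JSON-escape arbitrary text (quotes, newlines, backslashes)
# -R raw input, -s slurp all input into one string, -r raw output
escape() {
	echo "$1" | jq -Rrs 'tojson[1:-1]'
}

escaped=$(escape 'line one
line "two"')
echo "$escaped"
# → line one\nline \"two\"\n
```

The output is plain text containing backslash escapes, safe to embed between double quotes in a JSON message body.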
 # maintain chat context function for /completions (all models except
 # gpt turbo and gpt 4)
 # builds chat context from response,
 # keeps chat context length under max token limit
-# $1 should be the chat context
-# $2 should be the response data (only the text)
+# * $1 should be the escaped response data
+# * it extends $chat_context
 maintain_chat_context() {
-	chat_context="$1"
-	response_data="$2"
+	local escaped_response_data="$1"
 	# add response to chat context as answer
-	chat_context="$chat_context${chat_context:+\n}\nA: ${response_data//$'\n'/\\n}"
+	chat_context="$chat_context${chat_context:+\n}\nA: $escaped_response_data"
 	# check prompt length, 1 word =~ 1.3 tokens
 	# reserving 100 tokens for next user prompt
 	while (($(echo "$chat_context" | wc -c) * 1,3 > (MAX_TOKENS - 100))); do
@@ -149,36 +179,29 @@ maintain_chat_context() {
 # build user chat message function for /chat/completions (gpt models)
 # builds chat message before request,
-# $1 should be the chat message
-# $2 should be the escaped prompt
+# $1 should be the escaped request prompt,
+# it extends $chat_message
 build_user_chat_message() {
-	chat_message="$1"
-	escaped_prompt="$2"
+	local escaped_request_prompt="$1"
 	if [ -z "$chat_message" ]; then
-		chat_message="{\"role\": \"user\", \"content\": \"$escaped_prompt\"}"
+		chat_message="{\"role\": \"user\", \"content\": \"$escaped_request_prompt\"}"
 	else
-		chat_message="$chat_message, {\"role\": \"user\", \"content\": \"$escaped_prompt\"}"
+		chat_message="$chat_message, {\"role\": \"user\", \"content\": \"$escaped_request_prompt\"}"
 	fi
-	request_prompt="$chat_message"
 }
 # adds the assistant response to the message in (chatml) format
 # for /chat/completions (gpt models)
 # keeps messages length under max token limit
-# $1 should be the chat message
-# $2 should be the response data (only the text)
+# * $1 should be the escaped response data
+# * it extends and potentially shrinks $chat_message
 add_assistant_response_to_chat_message() {
-	chat_message="$1"
-	local local_response_data="$2"
-	# replace new line characters from response with space
-	local_response_data=$(echo "$local_response_data" | tr '\n' ' ')
+	local escaped_response_data="$1"
 	# add response to chat context as answer
-	chat_message="$chat_message, {\"role\": \"assistant\", \"content\": \"$local_response_data\"}"
+	chat_message="$chat_message, {\"role\": \"assistant\", \"content\": \"$escaped_response_data\"}"
 	# transform to json array to parse with jq
-	chat_message_json="[ $chat_message ]"
+	local chat_message_json="[ $chat_message ]"
 	# check prompt length, 1 word =~ 1.3 tokens
 	# reserving 100 tokens for next user prompt
 	while (($(echo "$chat_message" | wc -c) * 1,3 > (MAX_TOKENS - 100))); do
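`chat_message` is a growing comma-separated list of chatml objects, and wrapping it in brackets is what turns it into valid JSON that jq can parse. A standalone sketch of that accumulation, assuming the content strings are already escaped:

```shell
# Sketch of the chat_message accumulation: each turn appends one chatml
# object to a comma-separated list; "[ ... ]" makes the list parseable JSON
chat_message='{"role": "user", "content": "hello"}'
chat_message="$chat_message, {\"role\": \"assistant\", \"content\": \"hi there\"}"
chat_message_json="[ $chat_message ]"

# jq can now index into the conversation
echo "$chat_message_json" | jq -r '.[1].content'
# → hi there
```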
@@ -224,6 +247,10 @@ while [[ "$#" -gt 0 ]]; do
 		shift
 		shift
 		;;
+	-l | --list)
+		list_models
+		exit 0
+		;;
 	-m | --model)
 		MODEL="$2"
 		shift
@@ -234,6 +261,10 @@ while [[ "$#" -gt 0 ]]; do
 		shift
 		shift
 		;;
+	--multi-line-prompt)
+		MULTI_LINE_PROMPT=true
+		shift
+		;;
 	-c | --chat-context)
 		CONTEXT=true
 		shift
@@ -255,6 +286,13 @@ MAX_TOKENS=${MAX_TOKENS:-1024}
 MODEL=${MODEL:-gpt-3.5-turbo}
 SIZE=${SIZE:-512x512}
 CONTEXT=${CONTEXT:-false}
+MULTI_LINE_PROMPT=${MULTI_LINE_PROMPT:-false}
+
+# create our temp file for multi-line input
+if [ $MULTI_LINE_PROMPT = true ]; then
+	USER_INPUT_TEMP_FILE=$(mktemp)
+	trap 'rm -f ${USER_INPUT_TEMP_FILE}' EXIT
+fi
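The `trap ... EXIT` line is what guarantees the temp file is cleaned up no matter how the script ends. A tiny demonstration of the pattern in a subshell, where the EXIT trap fires as soon as the subshell returns:

```shell
# The mktemp + trap EXIT cleanup pattern, run inside a command-substitution
# subshell so the trap fires immediately when the subshell exits
tmpfile=$(
	f=$(mktemp)
	trap 'rm -f ${f}' EXIT
	echo "$f"
)

# the subshell has exited, so its EXIT trap has already deleted the file
[ ! -f "$tmpfile" ] && echo "cleaned up"
```

The same trap in the script proper fires when the whole script exits, including on Ctrl-C terminating the shell.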

 # create history file
 if [ ! -f ~/.chatgpt_history ]; then
@@ -279,9 +317,16 @@ fi
 while $running; do
 	if [ -z "$pipe_mode_prompt" ]; then
-		echo -e "\nEnter a prompt:"
-		read -e prompt
-		if [ "$prompt" != "exit" ] && [ "$prompt" != "q" ]; then
+		if [ $MULTI_LINE_PROMPT = true ]; then
+			echo -e "\nEnter a prompt: (Press Enter then Ctrl-D to send)"
+			cat > "${USER_INPUT_TEMP_FILE}"
+			input_from_temp_file=$(cat "${USER_INPUT_TEMP_FILE}")
+			prompt=$(escape "$input_from_temp_file")
+		else
+			echo -e "\nEnter a prompt:"
+			read -e prompt
+		fi
+		if [[ ! $prompt =~ ^(exit|q)$ ]]; then
 			echo -ne $PROCESSING_LABEL
 		fi
 	else
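The multi-line branch above reads stdin with `cat` until EOF (Ctrl-D at a terminal). A non-interactive sketch of the same flow, feeding the temp file from a here-doc instead of a keyboard:

```shell
# "cat > file" copies stdin until EOF; at a terminal that is Ctrl-D,
# here it is simulated with a here-doc
USER_INPUT_TEMP_FILE=$(mktemp)
cat > "${USER_INPUT_TEMP_FILE}" <<'EOF'
first line
second line
EOF
input_from_temp_file=$(cat "${USER_INPUT_TEMP_FILE}")
rm -f "${USER_INPUT_TEMP_FILE}"
echo "$input_from_temp_file"
```

Both lines arrive in one variable, which the script then passes through `escape` so the embedded newline becomes a literal `\n` in the JSON payload.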
@@ -291,12 +336,12 @@ while $running; do
 		CHATGPT_CYAN_LABEL=""
 	fi

-	if [ "$prompt" == "exit" ] || [ "$prompt" == "q" ]; then
+	if [[ $prompt =~ ^(exit|q)$ ]]; then
 		running=false
 	elif [[ "$prompt" =~ ^image: ]]; then
 		request_to_image "$prompt"
 		handle_error "$image_response"
-		image_url=$(echo $image_response | jq -r '.data[0].url')
+		image_url=$(echo "$image_response" | jq -r '.data[0].url')
 		echo -e "$OVERWRITE_PROCESSING_LINE"
 		echo -e "${CHATGPT_CYAN_LABEL}Your image was created. \n\nLink: ${image_url}\n"
@@ -318,13 +363,7 @@ while $running; do
 	elif [[ "$prompt" == "history" ]]; then
 		echo -e "\n$(cat ~/.chatgpt_history)"
 	elif [[ "$prompt" == "models" ]]; then
-		models_response=$(curl https://api.openai.com/v1/models \
-			-sS \
-			-H "Authorization: Bearer $OPENAI_KEY")
-		handle_error "$models_response"
-		models_data=$(echo $models_response | jq -r -C '.data[] | {id, owned_by, created}')
-		echo -e "$OVERWRITE_PROCESSING_LINE"
-		echo -e "${CHATGPT_CYAN_LABEL}This is a list of models currently available at OpenAI API:\n ${models_data}"
+		list_models
 	elif [[ "$prompt" =~ ^model: ]]; then
 		models_response=$(curl https://api.openai.com/v1/models \
 			-sS \
@@ -334,15 +373,12 @@ while $running; do
 		echo -e "$OVERWRITE_PROCESSING_LINE"
 		echo -e "${CHATGPT_CYAN_LABEL}Complete details for model: ${prompt#*model:}\n ${model_data}"
 	elif [[ "$prompt" =~ ^command: ]]; then
-		# escape quotation marks
-		escaped_prompt=$(echo "$prompt" | sed 's/"/\\"/g')
-		# escape new lines
-		if [[ "$prompt" =~ ^command: ]]; then
-			escaped_prompt=${prompt#command:}
-			request_prompt=$COMMAND_GENERATION_PROMPT${escaped_prompt//$'\n'/' '}
-		fi
-		build_user_chat_message "$chat_message" "$request_prompt"
-		request_to_chat "$request_prompt"
+		# escape quotation marks, new lines, backslashes...
+		escaped_prompt=$(escape "$prompt")
+		escaped_prompt=${escaped_prompt#command:}
+		request_prompt=$COMMAND_GENERATION_PROMPT$escaped_prompt
+		build_user_chat_message "$request_prompt"
+		response=$(request_to_chat "$chat_message")
 		handle_error "$response"
 		response_data=$(echo $response | jq -r '.choices[].message.content')
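Stripping the `command:` tag relies on `${var#pattern}` prefix removal. A hypothetical prompt shows the behavior, including the leading space that survives the strip:

```shell
# ${var#pattern} removes the shortest match of pattern from the front of var;
# the prompt text is a made-up example
prompt='command: list files modified today'
request_text=${prompt#command:}
echo "$request_text"
# → " list files modified today" (leading space kept)
```

If the prefix does not match, the expansion leaves the string untouched, so the strip is safe to apply unconditionally.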
@@ -363,20 +399,17 @@ while $running; do
 			eval $response_data
 		fi
 	fi
-	escaped_response_data=$(echo "$response_data" | sed 's/"/\\"/g')
-	add_assistant_response_to_chat_message "$chat_message" "$escaped_response_data"
+	add_assistant_response_to_chat_message "$(escape "$response_data")"

 	timestamp=$(date +"%d/%m/%Y %H:%M")
 	echo -e "$timestamp $prompt \n$response_data \n" >>~/.chatgpt_history
 elif [[ "$MODEL" =~ ^gpt- ]]; then
-	# escape quotation marks
-	escaped_prompt=$(echo "$prompt" | sed 's/"/\\"/g')
-	# escape new lines
-	request_prompt=${escaped_prompt//$'\n'/' '}
+	# escape quotation marks, new lines, backslashes...
+	request_prompt=$(escape "$prompt")

-	build_user_chat_message "$chat_message" "$request_prompt"
-	request_to_chat "$request_prompt"
+	build_user_chat_message "$request_prompt"
+	response=$(request_to_chat "$chat_message")
 	handle_error "$response"
 	response_data=$(echo "$response" | jq -r '.choices[].message.content')
@@ -385,26 +418,22 @@ while $running; do
 	if command -v glow &>/dev/null; then
 		echo -e "${CHATGPT_CYAN_LABEL}"
 		echo "${response_data}" | glow -
-		#echo -e "${formatted_text}"
 	else
-		echo -e "${CHATGPT_CYAN_LABEL}${response_data}" | fold -s -w $COLUMNS
+		echo -e "${CHATGPT_CYAN_LABEL}${response_data}" | fold -s -w "$COLUMNS"
 	fi
-	escaped_response_data=$(echo "$response_data" | sed 's/"/\\"/g')
-	add_assistant_response_to_chat_message "$chat_message" "$escaped_response_data"
+	add_assistant_response_to_chat_message "$(escape "$response_data")"

 	timestamp=$(date +"%d/%m/%Y %H:%M")
 	echo -e "$timestamp $prompt \n$response_data \n" >>~/.chatgpt_history
 else
-	# escape quotation marks
-	escaped_prompt=$(echo "$prompt" | sed 's/"/\\"/g')
-	# escape new lines
-	request_prompt=${escaped_prompt//$'\n'/' '}
+	# escape quotation marks, new lines, backslashes...
+	request_prompt=$(escape "$prompt")

 	if [ "$CONTEXT" = true ]; then
-		build_chat_context "$chat_context" "$escaped_prompt"
+		build_chat_context "$request_prompt"
 	fi
-	request_to_completions "$request_prompt"
+	response=$(request_to_completions "$request_prompt")
 	handle_error "$response"
 	response_data=$(echo "$response" | jq -r '.choices[].text')
@@ -414,14 +443,13 @@ while $running; do
 		echo -e "${CHATGPT_CYAN_LABEL}"
 		echo "${response_data}" | glow -
 	else
 		# else remove empty lines and print
 		formatted_text=$(echo "${response_data}" | sed '1,2d; s/^A://g')
 		echo -e "${CHATGPT_CYAN_LABEL}${formatted_text}" | fold -s -w $COLUMNS
 	fi

 	if [ "$CONTEXT" = true ]; then
-		escaped_response_data=$(echo "$response_data" | sed 's/"/\\"/g')
-		maintain_chat_context "$chat_context" "$escaped_response_data"
+		maintain_chat_context "$(escape "$response_data")"
 	fi

 	timestamp=$(date +"%d/%m/%Y %H:%M")
