This is an HTML5 template that integrates with the ChatGPT API. Our goal is to provide easy-to-use, highly customizable API code that lets you train and add your own artificial intelligence solutions through prompts.
Our product is customizable, allowing you to add or modify AI behavior through prompts. To deliver creative responses, our ChatGPT-powered model uses the theme of astrology to offer users nine intelligent oracle prompts: Dream Meaning, My Zodiac Information, Tarot Reading (Major Arcana), Get Your Numerology Reading, Know Your Vocational Map, Discover Your Power Animal, Create Your Birth Chart, Love Calculator, and Chinese Zodiac.
To better meet your needs, it is possible to add or modify any of the nine resources already available.
This tutorial is text-based, but if you prefer to watch a video on how to install, please access the link below:
https://www.youtube.com/watch?v=XMEKp67CHuQ
To use the ChatGPT API in conjunction with the AI Oracle, you need an OpenAI API key.
Follow the steps below to create a key:
Access the OpenAI website and create an account.
https://platform.openai.com/account/api-keys
After creating your account, log in to the OpenAI platform.
On the main page, locate the "API keys" button in the navigation menu and click on it.
Click on "Generate API key" to create a new API key.
Copy the generated API key and store it in a secure location.
Open the "php" folder in the files you downloaded.
Locate the "key.php" file inside the "php" folder.
Open the "key.php" file using a text editor, such as Notepad.
Paste the API key you generated on the OpenAI website into the location indicated inside the "key.php" file.
Save the key.php file and your configuration will be ready to go.
Note that the project cannot be run directly from a folder on your computer.
To test it, you must deploy it on an HTTP server running PHP 7 or higher, with SSL enabled.
You can use a local server such as WAMP or XAMPP, or host the project on an online server with PHP support.
Whichever you choose, make sure the server meets the project's requirements and is configured correctly, so you can test your project safely and without issues.
After setting up your project on an HTTP server, you can test it by accessing your website's address.
From there, simply choose a widget and send a test message to the Oracle. This will allow you to check if your project is working properly and if the features are operating as expected.
The project already comes with 9 pre-made examples of AI prompts ready for use.
If you want to add, remove or modify a specific prompt, you will need to access the prompts-en.json file located in the json folder.
Note that there are three prompt files (prompts-en.json, prompts-es.json, and prompts-pt-br.json), which have the same content in different languages: English, Spanish, and Brazilian Portuguese.
For this example, we will use the default language, which is English (prompts-en.json).
You can then open this file in a text editor and make the necessary changes.
On the previous page, we summarized the parameters in the prompts-en.json file.
Among them, the training, temperature, frequency_penalty, and presence_penalty parameters stand out as essential to the proper functioning of the project. Below, we detail each one of them.
Training: This parameter is responsible for defining the training of the AI.
It is the text that the AI will use to introduce itself and identify itself as an expert in a certain subject.
For example, the widget 'My Zodiac Information' uses the following prompt:
"training": "Be an Oracle who knows everything about signs and the zodiac. You are capable of providing accurate and helpful information about astrology and zodiac signs. [...]"
When writing in the training field, the oracle will follow the provided instructions. Additionally, you can also specify negations, instructing the Oracle not to respond to questions outside the scope or on a certain subject.
It is possible to define the tone of language that the Oracle will use when responding.
For example, you can direct the Oracle to always respond in an objective, funny, or detailed manner.
When writing in the training field, you can set actions for the Oracle and check its response.
If you are not satisfied, you can modify the training field and continue testing until you get the desired result.
Improving the training of a character depends on you: write in the training field, perform tests, and check if it meets your expectations.
Remember that you will have to do this for each widget in the JSON file.
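As an illustration of the kind of training text you might write, a training field that sets the expertise, the tone, and a negation (scope restriction) could look like the sketch below. The wording is a hypothetical example, not taken from the shipped file:

```json
"training": "Be an Oracle who knows everything about signs and the zodiac. Always respond in an objective and friendly tone. Do not answer questions that are not related to astrology or the zodiac."
```

After editing, send a few test messages to the widget and refine this text until the responses match your expectations.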
Temperature: The temperature parameter is a hyperparameter used in language generation models, including those available on the OpenAI platform, such as the GPT-3.5 and GPT-4 models.
This parameter controls the creativity and diversity of the text generated by the model.
Basically, temperature affects the probability of choosing the next word when the model is generating text.
Lower temperature values cause the model to choose the most likely words, according to the probability distribution learned during training, resulting in a more predictable and conservative text.
On the other hand, higher temperature values make word choice less predictable, allowing the model to produce more creative and diverse text, with more variation compared to previously generated text.
It is important to remember that a very high value for temperature can lead to incoherent or meaningless results, as the model may choose highly unlikely words.
Therefore, the appropriate value for temperature should be chosen carefully, depending on the type of task or application in question. In general, we recommend a temperature of around 0.7.
However, ideal values may vary depending on the model, task, and application domain, so feel free to experiment with values and test them yourself.
Through the prompts-en.json file, it is possible to set the temperature individually for each AI:
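For instance, a widget meant to give factual sign information could use a low value, while a widget meant to interpret dreams could use a higher one. The entries below are an illustrative sketch (the "widget" key name and the values are assumptions, not the shipped defaults):

```json
{ "widget": "My Zodiac Information", "temperature": 0.3 }
{ "widget": "Dream Meaning", "temperature": 0.9 }
```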
Both the "frequency_penalty" and "presence_penalty" parameters are used to control text generation in language models like GPT.
The main difference between them is that "frequency_penalty" is used to control the frequency of repeated words in a generated sequence, while "presence_penalty" is used to control the presence of specific words in a generated sequence.
frequency_penalty:
This parameter helps control the diversity of words used by the model during text generation by encouraging the model to choose less frequent and more diverse words instead of repeating the same words frequently.
The "frequency_penalty" is applied during text generation: it is added to the score that the model assigns to each candidate word. This score helps the model choose which word should come next based on its probability of appearing in the sequence.
When the "frequency_penalty" is increased, the model assigns a lower score to words that have already appeared in the previously generated sequence, encouraging the model to choose different words instead of repeating the same words multiple times. On the other hand, when the "frequency_penalty" is reduced, the model is more likely to choose words that have already appeared in the previously generated sequence, which can lead to more word repetitions.
presence_penalty:
This parameter is a measure of how strongly the model should penalize the repetitive use of words and phrases in its output. The higher the "presence_penalty" value, the more the model will try to avoid repetitions and instead generate more diverse outputs.
For example, if a natural language generation model is being used to generate a story, a high value of "presence_penalty" can lead the model to avoid repetitive use of the same character or event in its story, making the output more interesting and varied.
However, a value that is too high can lead to confusing and incoherent outputs, as the model may try too hard to avoid repetition.
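Putting the two penalties together with temperature, a widget that tends to repeat itself could be nudged toward more varied output with moderately positive values. The values below are illustrative suggestions; in the OpenAI API, both penalty parameters accept values between -2.0 and 2.0:

```json
{
  "temperature": 0.7,
  "frequency_penalty": 0.5,
  "presence_penalty": 0.6
}
```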
To access the project configuration options, you need to open the config.json file located inside the json folder.
When modifying the config.json file, it's important that you change the text that comes after the JSON key. Below, we explain the meaning of each parameter:
API_MODEL_options_available:
A list of available AI models that can be used by the chatbot, along with a brief description of each model.
use_text_stream:
A boolean value indicating whether the chat messages should be displayed in real-time or not.
display_contacts_user_list:
A boolean value indicating whether a list of contacts should be displayed in the chat interface.
display_avatar_in_chat:
A boolean value indicating whether the avatar of the chatbot should be displayed in the chat interface.
display_copy_text_button_in_chat:
A boolean value indicating whether a button for copying chat messages should be displayed in the chat interface.
display_audio_button_answers:
A boolean value indicating whether a button for audio answers should be displayed in the chat interface.
display_microphone_in_chat:
A boolean value indicating whether a button for using the microphone should be displayed in the chat interface.
microphone_speak_lang:
The language code for the language that the microphone should recognize.
filter_badwords:
A boolean value indicating whether to filter out bad words from chat messages.
chat_history:
A boolean value indicating whether to save chat history.
chat_font_size:
The font size for the chat interface.
shuffle_character:
A boolean value indicating whether the AI characters are displayed in random order.
dalle_img_size:
The size of the image that will be generated by the DALL-E model.
dalle_generated_img_count:
The number of images that will be generated by the DALL-E model.
dalle_img_size_available: The available image sizes for the DALL-E model.
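Putting the parameters above together, a config.json might look roughly like the sketch below. The values are illustrative, and the exact layout of the shipped file (for example, whether API_MODEL_options_available pairs each model with a description) may differ:

```json
{
  "API_MODEL_options_available": ["gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-4"],
  "use_text_stream": true,
  "display_contacts_user_list": true,
  "display_avatar_in_chat": true,
  "display_copy_text_button_in_chat": true,
  "display_audio_button_answers": true,
  "display_microphone_in_chat": true,
  "microphone_speak_lang": "en-US",
  "filter_badwords": false,
  "chat_history": true,
  "chat_font_size": "16px",
  "shuffle_character": false,
  "dalle_img_size": "512x512",
  "dalle_generated_img_count": 1,
  "dalle_img_size_available": ["256x256", "512x512", "1024x1024"]
}
```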
In this template, it is possible to use several ChatGPT model variants. See the list of available models below:
We recommend the gpt-3.5-turbo model, which is the most cost-effective and fastest option. To use gpt-4 models, your API key must have access to them.
gpt-3.5-turbo
Most capable GPT-3.5 model, optimized for chat at 1/10th the cost of text-davinci-003. Updated with OpenAI's latest model iterations.
4,096 tokens
gpt-3.5-turbo-16k
Same capabilities as the standard gpt-3.5-turbo model but with 4 times the context length.
16,384 tokens
gpt-4
More capable than any GPT-3.5 model, able to handle more complex tasks, and optimized for chat. Updated with OpenAI's latest model iterations.
8,192 tokens
gpt-4-32k
Same capabilities as the base gpt-4 model but with 4x the context length. Updated with OpenAI's latest model iterations.
32,768 tokens
text-davinci-003
Can perform any language task with better quality, longer output, and more consistent instruction-following than the curie, babbage, or ada models. Also supports additional features such as inserting text.
4,097 tokens
You can individually set up the model that each prompt will use.
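For example, you could keep most widgets on gpt-3.5-turbo and switch a single prompt entry to gpt-4. The "model" field name below is an assumption based on the OpenAI API; check the shipped prompts-en.json for the exact key used by this template:

```json
{ "model": "gpt-4" }
```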
DALL-E is an advanced artificial intelligence model developed by OpenAI, with the main purpose of generating images from descriptions provided by the user. For example, when enabling DALL-E for the AI, the user can make a request by including the "/img" command.
For instance: "/img white cat"
In response, DALL-E will create images of what was requested, specifically, images of white cats. This functionality allows the AI to produce visual representations based on specific commands, opening up diverse creative and practical possibilities for application in various fields.
Example:
It is worth noting that the generated images will remain in the chat for a certain period of time, which may expire after a few minutes or hours.
In the config.json file, you can configure the number of images that will appear in the chat, as well as the size of these images. It is important to highlight that only sizes 256x256, 512x512, and 1024x1024 are accepted.
In the chat, we use the Google Text-to-Speech function, a feature that allows text to be read through an audio button.
In the config.json file, you can change the "display_audio_button_answers" parameter to show or hide the audio button in the chat.
You can specify the language and voice for each AI by filling out the highlighted fields in the prompts-en.json file.
It is important to remember that the list of available voices is limited per browser. For example, Google Chrome offers around 20 free voices, while Edge has a more extensive list. If you want to view the list of compatible voices in your browser, open the browser console (by pressing F12) and paste the function displayVoices() into it. This will show the available voices for that browser, along with their language codes.
To filter the words that users will type in the chat, it is possible to use the available badwords system. To enable this feature, it is necessary to modify the "filter_badwords" option to true in the config.json file.
Additionally, you need to configure the offensive words in the badwords.json file, separated by commas, following the format already used in the file.
The filter will be activated after the user types and sends a word. If the word is deemed inappropriate according to the badwords settings in the badwords.json file, an error message will be displayed. You can also customize the text of this message in the lang.json file.
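A badwords.json following that comma-separated model might look like the hypothetical sketch below; the actual structure shipped with the project may differ, so match the format already present in the file:

```json
["badword1", "badword2", "badword3"]
```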
It is possible to translate the entire project interface, such as button and alert text, by editing the lang.json file located in the json folder.
By default, we already have three languages configured, and each one uses a code that can be defined in the "use_lang_index" parameter.
use_lang_index:0
The project will be translated to English
use_lang_index:1
The project will be translated to Brazilian Portuguese
use_lang_index:2
The project will be translated to Spanish
For prompts, you can manually translate the prompts-en.json file to a language other than English, Portuguese, or Spanish.
If you want to use another language for the prompt, modify the variable below in the js/config.js file.
We have provided the cards in SVG format at the link below. You can access them through the free software Figma and make the necessary edits.