
OpenAI brings fine-tuning to GPT-4o, with one million free training tokens per day through September 23

  • 22 August 2024

OpenAI announced the launch of fine-tuning for GPT-4o, addressing one of the most frequently requested features from developers. The new capability lets developers tailor GPT-4o to their own datasets, improving performance and reducing cost for specific applications.

To support the rollout, OpenAI is offering 1 million free training tokens per day for every organization through September 23. This incentive is designed to help developers explore fine-tuning without immediate cost concerns.

Developers can begin fine-tuning by visiting the fine-tuning dashboard. The process involves:

  1. Accessing the dashboard.
  2. Clicking "Create" to start a new project.
  3. Selecting gpt-4o-2024-08-06 from the base model options.
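For teams that prefer scripting over the dashboard, the same job can also be started through OpenAI's REST API. The sketch below builds the request with only the Python standard library; the `file-abc123` training-file ID is a placeholder you would replace with the ID returned when you upload your own JSONL file.

```python
import json
import os
import urllib.request

# Placeholder file ID — replace with the ID returned when you upload
# your JSONL training file via the Files API or the dashboard.
payload = {
    "training_file": "file-abc123",
    "model": "gpt-4o-2024-08-06",  # the base model from step 3 above
}

def build_request(api_key: str) -> urllib.request.Request:
    # POST /v1/fine_tuning/jobs starts a fine-tuning job on the base model.
    return urllib.request.Request(
        "https://api.openai.com/v1/fine_tuning/jobs",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY", "")
    req = build_request(key or "sk-placeholder")
    print("Prepared", req.get_method(), "request to", req.full_url)
    # To actually submit: urllib.request.urlopen(req)
    # (requires a valid API key and an uploaded training file).
```

The official `openai` Python SDK wraps this same endpoint, so the hand-rolled request is only needed when you want zero dependencies.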

GPT-4o Mini Also Available

OpenAI also offers GPT-4o mini for fine-tuning. This smaller model comes with 2 million free training tokens per day through September 23. To use it, developers should select gpt-4o-mini-2024-07-18 on the fine-tuning dashboard.
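Whichever base model you pick, the training file itself uses OpenAI's chat fine-tuning format: one JSON object per line, each containing a complete `messages` exchange the model should learn from. A minimal sketch (the example conversation is sample data):

```python
import json

# Each JSONL line is one training example: a full chat exchange
# showing the assistant behavior the fine-tuned model should learn.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a terse support bot."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Settings > Security > Reset password."},
    ]},
]

with open("training.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

In practice you would write dozens or hundreds of such examples; the free daily token allowance is counted against the tokens in this file multiplied by the number of training epochs.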

Data Privacy and Safety

  • Control: Fine-tuned models remain under the developer's control, with full ownership of business data, including all inputs and outputs.
  • Safety: Layered safety mitigations are in place, including automated safety evaluations and usage monitoring to ensure compliance with usage policies.

What is Fine-tuning?

Fine-tuning in AI refers to the process of taking a pre-trained model and further training it on a new, often smaller, dataset that is specific to a particular task or domain. This allows the model to adapt to new data and improve its performance on specific tasks without needing to be trained from scratch.
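Once a fine-tuning job completes, the resulting model is used exactly like the base model, just addressed by its own ID. The sketch below builds a chat-completions request body; the `ft:` model name shown is illustrative, not a real ID.

```python
import json

# Hypothetical fine-tuned model ID — completed jobs return an ID of
# roughly this shape (base model, organization, optional suffix, job hash).
FT_MODEL = "ft:gpt-4o-2024-08-06:my-org::abc123"

# Request body for POST https://api.openai.com/v1/chat/completions —
# the fine-tuned model is a drop-in replacement for the base model name.
body = {
    "model": FT_MODEL,
    "messages": [
        {"role": "user", "content": "How do I reset my password?"},
    ],
}

print(json.dumps(body, indent=2))
```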
