OpenAI introduces function calling capabilities and lower prices for its GPT models.
Developers can now describe functions to GPT-4 and GPT-3.5 Turbo, allowing for more reliable integration with external tools and APIs.
The updated models offer improved steerability, extended context length, and reduced pricing.
These updates aim to enhance the usability, affordability, and versatility of OpenAI's language models.
How It Works
Function Calling: Developers describe functions to GPT-4 and GPT-3.5 Turbo, and the model can respond with a structured JSON object containing a function name and arguments instead of free-form text. The model does not execute anything itself; the application parses the output and calls the function.
Steerability: The updated models follow system-message instructions more reliably, giving developers finer control over tone and behavior.
Extended Context: GPT-3.5 Turbo now comes in a 16k-token variant, four times the standard 4k context, enabling it to process much larger documents and conversations.
Lower Prices: OpenAI cut the price of its state-of-the-art embeddings model by 75% and GPT-3.5 Turbo input tokens by 25%, making both more accessible to developers.
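The function-calling flow above can be sketched without a network call. This is a minimal, simulated example: the function schema uses the JSON Schema format the Chat Completions API expects via its `functions` parameter, and `response_message` mimics the shape of a `function_call` payload the model might return (the `get_current_weather` function and its fields are illustrative placeholders, not real API output).

```python
import json

# Function schema in the JSON Schema format the API's `functions` parameter
# expects. The model reads the name/description and decides when to "call" it.
weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name, e.g. Boston"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

# Simulated assistant message: instead of text, the model returns the function
# name plus a JSON-encoded arguments string for the application to act on.
response_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Boston", "unit": "celsius"}',
    },
}

def dispatch(message, registry):
    """Parse the model's function_call and invoke the matching local function."""
    call = message["function_call"]
    args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    return registry[call["name"]](**args)

def get_current_weather(location, unit="fahrenheit"):
    # Hypothetical stand-in for a real weather-service lookup.
    return {"location": location, "unit": unit, "temperature": 22}

result = dispatch(response_message, {"get_current_weather": get_current_weather})
print(result)
```

In a real integration you would send the function result back to the model in a follow-up message so it can compose a natural-language answer for the user.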
My Thoughts
OpenAI's updates enhance functionality, affordability, and developer experience.
Function calling lets developers connect models to external tools and APIs by converting natural language into structured, machine-readable calls, a far more reliable integration path than parsing free-form text.
Cost reductions promote wider adoption and innovation.
Safety measures are crucial when integrating untrusted data from external tools: the model's output is not guaranteed to match the declared schema, and tool results can carry prompt-injection attempts, so developers should validate arguments and confirm consequential actions with the user before executing them.
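That validation point can be made concrete. Below is a minimal sketch, under the assumption of the hypothetical `get_current_weather` schema (a `location` string and an optional `unit` enum); the helper name and field checks are illustrative, since the API itself does not guarantee schema-valid arguments.

```python
import json

ALLOWED_UNITS = {"celsius", "fahrenheit"}

def safe_parse_arguments(raw_arguments):
    """Defensively parse model-produced arguments before acting on them.

    Returns a validated dict, or None if the payload is malformed or
    violates the expected (hypothetical) schema.
    """
    try:
        args = json.loads(raw_arguments)
    except json.JSONDecodeError:
        return None  # model emitted invalid JSON
    if not isinstance(args, dict):
        return None
    if not isinstance(args.get("location"), str):
        return None  # required field missing or wrong type
    if args.get("unit", "celsius") not in ALLOWED_UNITS:
        return None  # enum value outside the declared schema
    return args
```

Rejecting rather than repairing bad payloads keeps the failure mode explicit; the application can then re-prompt the model or fall back to asking the user.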