Lessons learned on language model safety and misuse
We describe our latest thinking in the hope of helping other AI developers address safety and misuse of deployed models.
Call for expressions of interest to study the economic impacts of large language models.
We built a neural theorem prover for Lean that learned to solve a variety of challenging high-school olympiad problems, including problems from the AMC12 and AIME competitions, as well as two problems adapted from the IMO.
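To illustrate the setting (not one of the solved competition problems), here is a minimal sketch of what a formalized statement and proof look like in Lean, assuming mathlib's `sq_nonneg` and `add_nonneg` lemmas; the AMC12/AIME/IMO problems the prover tackles are substantially harder.

```lean
-- Toy statement in the spirit of olympiad formalization:
-- the sum of two squares of integers is nonnegative.
-- (Uses mathlib lemmas; real competition formalizations are far more involved.)
theorem sum_sq_nonneg (a b : ℤ) : 0 ≤ a ^ 2 + b ^ 2 :=
  add_nonneg (sq_nonneg a) (sq_nonneg b)
```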
We’ve trained language models that are much better at following user intentions than GPT-3 while also making them more truthful and less toxic, using techniques developed through our alignment research. These InstructGPT models, which are trained with humans in the loop, are now deployed as the default language models on our API.
We are introducing embeddings, a new endpoint in the OpenAI API that makes it easy to perform natural language and code tasks like semantic search, clustering, topic modeling, and classification.
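As a sketch of the semantic-search use case: the embeddings endpoint maps each text to a vector, and results are ranked by cosine similarity to the query vector. The vectors below are made-up stand-ins for real API output, kept small for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot product over norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embedding vectors, standing in for what the API would
# return for each document. Real embeddings have hundreds of dimensions.
docs = {
    "feline pet":   [0.9, 0.1, 0.0],
    "house cat":    [0.8, 0.2, 0.1],
    "stock market": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # hypothetical embedding of the query "cat"

# Semantic search: rank documents by similarity to the query embedding.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]),
                reverse=True)
print(ranked)  # cat-related documents rank above the unrelated one
```

The same ranking idea underlies the clustering and classification uses: nearby vectors are treated as semantically related.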
We’ve fine-tuned GPT-3 to more accurately answer open-ended questions using a text-based web browser.
Fine-tune with a single command.
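For context, a fine-tuning job of this era could be launched from the terminal with the `openai` CLI; the training file and base model name below are illustrative placeholders.

```shell
# Illustrative invocation of the openai CLI of this period;
# train.jsonl and the base model are placeholders, not real artifacts.
openai api fine_tunes.create -t train.jsonl -m curie
```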
As part of our effort to support and develop AI talent, we’re excited to announce the OpenAI Residency.
Wider availability made possible by safety progress.
We’ve trained a system that solves grade school math problems with nearly twice the accuracy of a fine-tuned GPT-3 model. It solves about 90% as many problems as real kids: a small sample of 9-12 year olds scored 60% on a test from our dataset, while our system scored 55% on those same problems.
Scaling human oversight of AI systems for tasks that are difficult to evaluate.
Today, we’re excited to announce the appointment of Helen Toner to our board of directors.
We’ve created an improved version of OpenAI Codex, our AI system that translates natural language to code, and we are releasing it through our API in private beta starting today.
We’re releasing Triton 1.0, an open-source Python-like programming language which enables researchers with no CUDA experience to write highly efficient GPU code—most of the time on par with what an expert would be able to produce.
Our latest research finds we can improve language model behavior with respect to specific behavioral values by fine-tuning on a small, curated dataset.
We’re proud to announce that the 2021 class of OpenAI Scholars has completed our six-month mentorship program and produced open-source research projects with stipends and support from OpenAI.
OpenAI is committed to developing general-purpose artificial intelligence that benefits all humanity, and we believe that achieving our goal requires expertise in public policy as well as technology. So, we’re delighted to announce that Congressman Will Hurd has joined our board of directors.
Over 300 applications are delivering GPT-3–powered search, conversation, text completion, and other advanced AI features through our API.
We’ve discovered neurons in CLIP that respond to the same concept whether presented literally, symbolically, or conceptually. This may explain CLIP’s accuracy in classifying surprising visual renditions of concepts, and is also an important step toward understanding the associations and biases that CLIP and similar models learn.
We’ve scaled Kubernetes clusters to 7,500 nodes, producing a scalable infrastructure not only for large models like GPT-3, CLIP, and DALL·E, but also for rapid small-scale iterative research such as Scaling Laws for Neural Language Models.
We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.
We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the “zero-shot” capabilities of GPT-2 and GPT-3.
It’s been a year of dramatic change and growth at OpenAI.
OpenAI has agreed to license GPT-3 to Microsoft for its own products and services.
We’ve applied reinforcement learning from human feedback to train language models that are better at summarization.