OpenAI Five Finals
We’ll be holding our final live event for OpenAI Five at 11:30am PT on April 13.
We’ve made progress towards stable and scalable training of energy-based models (EBMs), resulting in better sample quality and generalization ability than existing models. Generation in EBMs spends more compute to continually refine its answers, and doing so can generate samples competitive with GANs ...
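The "spend more compute to refine" idea can be sketched with Langevin-style sampling: start from noise and repeatedly step downhill on an energy function with a little injected noise. This is a minimal toy, assuming a hand-written quadratic energy in place of the trained neural-network energy function.

```python
import numpy as np

TARGET = np.array([1.0, -1.0])

def energy(x):
    # Toy energy with its minimum at TARGET; a real EBM would use a neural net.
    return 0.5 * np.sum((x - TARGET) ** 2)

def grad_energy(x):
    return x - TARGET

def langevin_sample(steps=100, step_size=0.1, noise_scale=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=2)  # start from noise
    for _ in range(steps):
        # Gradient step on the energy plus injected noise: each extra
        # iteration spends more compute to refine the sample further.
        x = x - step_size * grad_energy(x) + noise_scale * rng.normal(size=2)
    return x

print(energy(langevin_sample()))  # small: the refined sample sits near the minimum
```

Running more steps trades compute for sample quality, which is the contrast with a GAN's single forward pass.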
Our class of eight scholars (out of 550 applicants) brings together collective expertise in literature, philosophy, cell biology, statistics, economics, quantum physics, and business innovation.
We’ve created activation atlases (in collaboration with Google researchers), a new technique for visualizing what interactions between neurons can represent. As AI systems are deployed in increasingly sensitive contexts, having a better understanding of their internal decision-making processes will...
On February 2, we held our first Spinning Up Workshop as part of our new education initiative at OpenAI.
We’ve written a paper arguing that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved. Properly aligning advanced AI systems with human values requires resolving many uncertainties related to the psychology of human rational...
We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task...
Our first cohort of OpenAI Fellows has concluded, with each Fellow going from a machine learning beginner to core OpenAI contributor in the course of a 6-month apprenticeship.
We’ve discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks. Since complex tasks tend to have noisier gradients, increasingly large batch sizes are likely to become useful in the future, removing one...
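A simplified form of the statistic can be estimated from per-example gradients: the trace of the gradient covariance divided by the squared norm of the mean gradient. The sketch below uses synthetic "per-example gradients" rather than a real training run; the function name and data are illustrative, not the paper's exact estimator.

```python
import numpy as np

def simple_noise_scale(per_example_grads):
    # Trace of the per-example gradient covariance over the squared norm
    # of the mean gradient: a rough estimate of the useful batch size.
    g = per_example_grads.mean(axis=0)            # mean ("true-batch") gradient
    var = per_example_grads.var(axis=0, ddof=1)   # per-coordinate variance
    return var.sum() / (g @ g)

# Synthetic gradients: noisy samples around a common mean direction.
rng = np.random.default_rng(0)
grads = rng.normal(loc=1.0, scale=2.0, size=(10_000, 5))
print(simple_noise_scale(grads))  # noisier gradients -> larger value
```

Tasks whose gradients are noisy relative to their mean score high on this statistic, which is the sense in which complex tasks benefit from larger batches.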
We’re releasing CoinRun, a training environment which provides a metric for an agent’s ability to transfer its experience to novel situations and has already helped clarify a longstanding puzzle in reinforcement learning. CoinRun strikes a desirable balance in complexity: the environment is simpler...
We’ve developed an energy-based model that can quickly learn to identify and generate instances of concepts, such as near, above, between, closest, and furthest, expressed as sets of 2d points. Our model learns these concepts after only five demonstrations. We also show cross-domain transfer: we use...
We’re proposing an AI safety technique called iterated amplification that lets us specify complicated behaviors and goals that are beyond human scale, by demonstrating how to decompose a task into simpler sub-tasks, rather than by providing labeled data or a reward function. Although this idea is in...
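The decomposition idea can be illustrated with a toy: a "human" who can only answer tiny questions (adding two numbers) is amplified into answering a question beyond their scale (summing a long list) by recursively splitting it into sub-questions. This is a sketch of the decomposition concept only, not the training procedure; `human` and `amplified_solve` are hypothetical names.

```python
def human(a, b):
    # Stand-in for a human who can only answer small questions.
    return a + b

def amplified_solve(xs):
    # Decompose a question too big for the human into sub-questions,
    # solve those recursively, and let the human combine the answers.
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return human(amplified_solve(xs[:mid]), amplified_solve(xs[mid:]))

print(amplified_solve(list(range(1, 101))))  # 5050
```

No labeled data or reward function appears anywhere: the behavior is specified entirely by how the task decomposes.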
We are now accepting applications for our second cohort of OpenAI Scholars, a program where we provide 6–10 stipends and mentorship to individuals from underrepresented groups to study deep learning full-time for 3 months and open-source a project.
We are now accepting applications for OpenAI Fellows and Interns for 2019.
Our first cohort of OpenAI Scholars has now completed the program.
OpenAI Five lost two games against top Dota 2 players at The International in Vancouver this week, maintaining a good chance of winning for the first 20–35 minutes of both games.
Yesterday, OpenAI Five won a best-of-three against a team of 99.95th percentile Dota players: Blitz, Cap, Fogged, Merlini, and MoonMeander—four of whom have played Dota professionally—in front of a live audience and 100,000 concurrent livestream viewers.
We’ve trained a human-like robot hand to manipulate physical objects with unprecedented dexterity.
Our first class of OpenAI Scholars is underway, and you can now follow along as these experienced software developers become machine learning practitioners.
The OpenAI Five Benchmark match is now over!
We’ve trained an agent to achieve a high score of 74,500 on Montezuma’s Revenge from a single human demonstration, better than any previously published result. Our algorithm is simple: the agent plays a sequence of games starting from carefully chosen states from the demonstration, and learns from t...
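The reset schedule behind "carefully chosen states" can be sketched as a backward curriculum: start the agent near the end of the demonstration, and move the reset point earlier once it can succeed from there. This is a toy of the scheduling logic only; `can_succeed_from` stands in for an entire RL training loop and is hypothetical.

```python
def backward_curriculum(demo_states, can_succeed_from):
    # Reset to demonstration states from the end backwards, moving the
    # reset earlier each time the agent masters the current segment.
    start = len(demo_states) - 1
    order = []
    while start >= 0:
        order.append(start)
        if can_succeed_from(demo_states[start]):
            start -= 1  # mastered this segment: move the reset point earlier
    return order  # indices in the order they were used as reset points

demo = ["s0", "s1", "s2", "s3"]
print(backward_curriculum(demo, can_succeed_from=lambda s: True))
# With an always-succeeding agent the resets walk back: [3, 2, 1, 0]
```

Starting near the goal makes reward easy to reach at first, which sidesteps Montezuma's Revenge's notoriously sparse exploration problem.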
Our team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2.
We’ve obtained state-of-the-art results on a suite of diverse language tasks with a scalable, task-agnostic system, which we’re also releasing. Our approach is a combination of two existing ideas: transformers and unsupervised pre-training. These results provide a convincing example that pairing sup...
We’re now accepting applications for the next cohort of OpenAI Fellows, a program which offers a compensated 6-month apprenticeship in AI research at OpenAI.
We’re releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time (by comparison, Moore’s Law had a 2-year doubling period)[^footnote-correction]. Since 2012, this metric has grown by more...
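The arithmetic behind these figures is straightforward: a 300,000x increase at a 3.4-month doubling time takes log2(300,000) ≈ 18.2 doublings, or roughly five years. A quick check, using only the numbers stated above:

```python
import math

DOUBLING_MONTHS = 3.4            # doubling time reported in the analysis
FACTOR = 300_000                 # the growth since 2012 cited above

doublings = math.log2(FACTOR)    # doublings needed for a 300,000x increase
months = doublings * DOUBLING_MONTHS
print(f"{doublings:.1f} doublings ~= {months:.0f} months")
```

That works out to about 62 months, consistent with exponential growth at this rate over the period since 2012.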
We’re proposing an AI safety technique which trains agents to debate topics with one another, using a human to judge who wins.
We’re releasing an experimental metalearning approach called Evolved Policy Gradients, a method that evolves the loss function of learning agents, which can enable fast training on novel tasks. Agents trained with EPG can succeed at basic tasks at test time that were outside their training regime, l...
On March 3rd, we hosted our first hackathon with 100 members of the artificial intelligence community.
We’re providing 6–10 stipends and mentorship to individuals from underrepresented groups to study deep learning full-time for 3 months and open-source a project.