OpenAI hackathon
Come to OpenAI's office in San Francisco's Mission District for talks and a hackathon on Saturday, March 3rd.
We're excited to welcome new donors to OpenAI.
We've co-authored a paper that forecasts how malicious actors could misuse AI technology, and potential ways we can prevent and mitigate these threats. This paper is the outcome of almost a year of sustained work with our colleagues at the Future of Humanity Institute, the Centre for the Study of Ex...
We've designed a method that encourages AIs to teach each other with examples that also make sense to humans. Our approach automatically selects the most informative examples to teach a concept (for instance, the best images to describe the concept of dogs), and experimentally we found our approach to...
We've built a system for automatically figuring out which object is meant by a word by having a neural network decide if the word belongs to each of about 100 automatically-discovered "types" (non-exclusive categories).
We're releasing a new batch of seven unsolved problems which have come up in the course of our research at OpenAI.
We're releasing highly-optimized GPU kernels for an underexplored class of neural network architectures: networks with block-sparse weights. Depending on the chosen sparsity, these kernels can run orders of magnitude faster than cuBLAS or cuSPARSE. We've used them to attain state-of-the-art results...
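The kernels themselves are CUDA, but the block-sparse multiply they accelerate can be sketched in plain NumPy. The function name, block layout, and mask format below are illustrative assumptions, not the released API; the point is that work is done only for blocks the mask marks as nonzero, which is where the speedup over a dense matmul comes from.

```python
import numpy as np

def block_sparse_matmul(x, blocks, mask, block_size):
    """Multiply x of shape (n, k) by a block-sparse weight matrix of shape (k, m).

    `mask` is a (k // block_size, m // block_size) boolean array marking
    nonzero blocks; `blocks` holds dense values for those blocks only, in
    row-major order of the mask. Zero blocks are skipped entirely.
    """
    bs = block_size
    n, k = x.shape
    m = mask.shape[1] * bs
    out = np.zeros((n, m))
    b = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j]:
                # Only nonzero blocks contribute a small dense matmul.
                out[:, j*bs:(j+1)*bs] += x[:, i*bs:(i+1)*bs] @ blocks[b]
                b += 1
    return out
```

The released kernels do the equivalent on the GPU, with the block pattern fixed at kernel-construction time so the zero blocks cost nothing.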
We've developed a hierarchical reinforcement learning algorithm that learns high-level actions useful for solving a range of tasks, allowing fast solving of tasks requiring thousands of timesteps. Our algorithm, when applied to a set of navigation problems, discovers a set of high-level actions for...
Our latest robotics techniques allow robot controllers, trained entirely in simulation and deployed on physical robots, to react to unplanned changes in the environment as they solve simple tasks. That is, we've used these techniques to build closed-loop systems rather than open-loop ones as before.
We've found that self-play allows simulated AIs to discover physical skills like tackling, ducking, faking, kicking, catching, and diving for the ball, without explicitly designing an environment with these skills in mind. Self-play ensures that the environment is always the right difficulty for an...
We show that for the task of simulated robot wrestling, a meta-learning agent can learn to quickly defeat a stronger non-meta-learning agent, and also show that the meta-learning agent can adapt to physical malfunction.
We're releasing an algorithm which accounts for the fact that other agents are learning too, and discovers self-interested yet collaborative strategies like tit-for-tat in the iterated prisoner's dilemma. This algorithm, Learning with Opponent-Learning Awareness (LOLA), is a small step towards agent...
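LOLA itself differentiates through the opponent's learning step; the strategy it recovers, tit-for-tat, is easy to see in a plain simulation of the iterated prisoner's dilemma (the payoff values and helper names here are illustrative, not LOLA's code):

```python
# Standard prisoner's dilemma payoffs for (row, column) players:
# mutual cooperation beats mutual defection, but defecting against a
# cooperator pays best for the defector.
PAYOFF = {('C', 'C'): (-1, -1), ('C', 'D'): (-3, 0),
          ('D', 'C'): (0, -3), ('D', 'D'): (-2, -2)}

def tit_for_tat(opponent_moves):
    # Cooperate first, then mirror the opponent's previous move.
    return 'C' if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return 'D'

def play(strategy1, strategy2, rounds=10):
    """Play the iterated game and return each player's total payoff."""
    moves1, moves2 = [], []
    score1 = score2 = 0
    for _ in range(rounds):
        a1 = strategy1(moves2)  # each strategy sees the opponent's history
        a2 = strategy2(moves1)
        moves1.append(a1)
        moves2.append(a2)
        r1, r2 = PAYOFF[(a1, a2)]
        score1 += r1
        score2 += r2
    return score1, score2
```

Two tit-for-tat players settle into mutual cooperation, which is why a learner that anticipates its opponent's updates can do better than one that treats the opponent as fixed.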
We're releasing two new OpenAI Baselines implementations: ACKTR and A2C. A2C is a synchronous, deterministic variant of Asynchronous Advantage Actor Critic (A3C) which we've found gives equal performance. ACKTR is a more sample-efficient reinforcement learning algorithm than TRPO and A2C, and requir...
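Baselines contains the full implementations; the core quantity that an A2C-style update feeds its policy gradient, the n-step advantage over a synchronous rollout segment, can be sketched as follows (a schematic helper, not Baselines' code):

```python
import numpy as np

def n_step_advantages(rewards, values, last_value, gamma=0.99):
    """Discounted returns and advantages for one rollout segment.

    rewards:    (T,) rewards collected in the segment
    values:     (T,) critic estimates V(s_t) for each visited state
    last_value: bootstrap estimate V(s_T) for the state after the segment
    """
    T = len(rewards)
    returns = np.zeros(T)
    running = last_value
    for t in reversed(range(T)):
        # Work backwards, bootstrapping from the critic at the segment end.
        running = rewards[t] + gamma * running
        returns[t] = running
    advantages = returns - values  # the policy gradient is weighted by these
    return returns, advantages
```

In the synchronous variant, all parallel environments step together and these advantages are computed over one batch, which is what makes A2C deterministic relative to A3C.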
Our Dota 2 result shows that self-play can catapult the performance of machine learning systems from far below human level to superhuman, given sufficient compute. In the span of a month, our system went from barely matching a high-ranked player to beating the top pros and has continued to improve s...
We've created a bot which beats the world's top professionals at 1v1 matches of Dota 2 under standard tournament rules. The bot learned the game from scratch by self-play, and does not use imitation learning or tree search. This is a step towards building AI systems which accomplish well-defined goa...
RL-Teacher is an open-source implementation of our interface to train AIs via occasional human feedback rather than hand-crafted reward functions. The underlying technique was developed as a step towards safe AI systems, but also applies to reinforcement learning problems with rewards that are hard...
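The underlying technique fits a reward model to pairwise human comparisons between trajectory segments. A common way to express the comparison, and the one sketched here as an illustration (not RL-Teacher's exact code), is a Bradley-Terry model over the summed predicted rewards of the two segments:

```python
import math

def preference_probability(rewards_a, rewards_b):
    """Model probability that a human prefers segment A over segment B:
    P(A > B) = exp(sum r_A) / (exp(sum r_A) + exp(sum r_B)).

    Training pushes these probabilities to match the recorded human
    choices, which shapes the learned reward function.
    """
    ra, rb = sum(rewards_a), sum(rewards_b)
    m = max(ra, rb)  # subtract the max for numerical stability
    ea, eb = math.exp(ra - m), math.exp(rb - m)
    return ea / (ea + eb)
```

The agent then maximizes the learned reward with an ordinary RL algorithm, so occasional comparisons stand in for a hand-crafted reward function.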
We've found that adding adaptive noise to the parameters of reinforcement learning algorithms frequently boosts performance. This exploration method is simple to implement and very rarely decreases performance, so it's worth trying on any problem.
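The idea can be sketched in a few lines (the names, target distance, and adaptation factor below are illustrative assumptions): perturb the policy's parameters directly rather than its actions, then adapt the noise scale so the perturbed policy's behavior stays a fixed distance from the unperturbed one:

```python
import numpy as np

def perturb_and_adapt(params, sigma, policy_distance, target=0.1, alpha=1.01):
    """One step of adaptive parameter-space noise.

    A perturbed copy of the parameters drives exploration for the next
    rollout; `policy_distance` is a caller-measured distance (e.g. mean
    action difference) between the perturbed and unperturbed policies.
    """
    rng = np.random.default_rng(0)
    perturbed = params + sigma * rng.standard_normal(params.shape)
    # Grow sigma when the perturbation barely changes behavior,
    # shrink it when the behavior changes too much.
    new_sigma = sigma * alpha if policy_distance < target else sigma / alpha
    return perturbed, new_sigma
```

Because the same perturbed parameters are used for a whole rollout, the exploration is temporally consistent, unlike independent per-step action noise.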
We're releasing a new class of reinforcement learning algorithms, Proximal Policy Optimization (PPO), which perform comparably or better than state-of-the-art approaches while being much simpler to implement and tune. PPO has become the default reinforcement learning algorithm at OpenAI because of i...
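The heart of PPO is a clipped surrogate objective; a minimal NumPy sketch of that objective (the function name is ours):

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """PPO's clipped surrogate objective (to be maximized).

    The probability ratio between the new and old policies is clipped to
    [1 - eps, 1 + eps], removing the incentive to move the policy far
    from the one that collected the data.
    """
    ratio = np.exp(logp_new - logp_old)
    clipped = np.clip(ratio, 1 - eps, 1 + eps)
    # Taking the minimum makes the clipped objective a pessimistic bound.
    return np.mean(np.minimum(ratio * advantages, clipped * advantages))
```

This simple clipping is what lets PPO take multiple minibatch gradient steps on the same data without the trust-region machinery of TRPO.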
We've created images that reliably fool neural network classifiers when viewed from varied scales and perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously since they capture images from multiple scales, angles, perspectives, and the like.
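One way to read the construction: rather than optimizing the perturbation against a single view of the image, optimize it against the expected loss over an ensemble of transformations, so the resulting image fools the classifier from every view. A schematic of that objective (names and the transformation set are illustrative):

```python
import numpy as np

def expected_loss(image, loss_fn, transforms):
    """Adversarial loss averaged over a set of views of the image.

    The attack perturbs `image` to maximize this expectation instead of
    the loss at a single view, which is what makes the fooling image
    survive changes of scale and perspective.
    """
    return float(np.mean([loss_fn(t(image)) for t in transforms]))
```

An attack on a single view can be undone by rescaling; averaging over views removes that defense.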
We're open-sourcing a high-performance Python library for robotic simulation using the MuJoCo engine, developed over our past year of robotics research.