Report from the OpenAI hackathon
On March 3rd, we hosted our first hackathon with 100 members of the artificial intelligence community.
We’re providing 6–10 stipends and mentorship to individuals from underrepresented groups to study deep learning full-time for 3 months and open-source a project.
We’re releasing eight simulated robotics environments and a Baselines implementation of Hindsight Experience Replay, all developed for our research over the past year. We’ve used these environments to train models which work on physical robots. We’re also releasing a set of requests for robotics res...
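The core idea behind Hindsight Experience Replay is goal relabeling: after a failed episode, pretend that whatever state the agent did reach was the goal all along, so the replay buffer still contains successful examples. The sketch below is ours, not the Baselines implementation; the episode tuple layout and `reward_fn` are illustrative assumptions.

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Generate extra transitions by substituting achieved goals ('future' strategy).

    episode: list of (state, action, achieved_goal, next_achieved_goal) tuples.
    reward_fn(achieved, goal): reward the environment would have given
    had `goal` been the commanded goal.
    Returns k relabeled copies of each transition, with goals sampled
    from states actually achieved later in the same episode.
    """
    extra = []
    for t, (s, a, ag, ag_next) in enumerate(episode):
        future = episode[t:]  # only relabel with goals achieved from here on
        for _ in range(k):
            _, _, _, g = random.choice(future)
            extra.append((s, a, g, reward_fn(ag_next, g)))
    return extra
```

Because relabeled transitions often have `ag_next == g`, the buffer gains positive rewards even when the original goal was never reached, which is what makes sparse-reward tasks tractable.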
Come to OpenAI’s office in San Francisco’s Mission District for talks and a hackathon on Saturday, March 3rd.
We’ve co-authored a paper that forecasts how malicious actors could misuse AI technology, and potential ways we can prevent and mitigate these threats. This paper is the outcome of almost a year of sustained work with our colleagues at the Future of Humanity Institute, the Centre for the Study of Ex...
We’re excited to welcome new donors to OpenAI.
We’ve designed a method that encourages AIs to teach each other with examples that also make sense to humans. Our approach automatically selects the most informative examples to teach a concept—for instance, the best images to describe the concept of dogs—and experimentally we found our approach to...
We’re releasing a new batch of seven unsolved problems which have come up in the course of our research at OpenAI.
We’re releasing highly optimized GPU kernels for an underexplored class of neural network architectures: networks with block-sparse weights. Depending on the chosen sparsity, these kernels can run orders of magnitude faster than cuBLAS or cuSPARSE. We’ve used them to attain state-of-the-art results...
Our latest robotics techniques allow robot controllers, trained entirely in simulation and deployed on physical robots, to react to unplanned changes in the environment as they solve simple tasks. That is, we’ve used these techniques to build closed-loop systems rather than open-loop ones as before.
We’ve found that self-play allows simulated AIs to discover physical skills like tackling, ducking, faking, kicking, catching, and diving for the ball, without explicitly designing an environment with these skills in mind. Self-play ensures that the environment is always the right difficulty for an...
We’re releasing two new OpenAI Baselines implementations: ACKTR and A2C. A2C is a synchronous, deterministic variant of Asynchronous Advantage Actor Critic (A3C) which we’ve found gives equal performance. ACKTR is a more sample-efficient reinforcement learning algorithm than TRPO and A2C, and requir...
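Both A2C and ACKTR are built around the same quantity: the advantage, i.e. how much better an n-step rollout turned out than the critic's value estimate. A minimal sketch of that computation (our illustration, not the Baselines code; function and argument names are ours):

```python
import numpy as np

def advantages(rewards, values, last_value, gamma=0.99):
    """n-step advantage estimates A_t = R_t - V(s_t).

    rewards: per-step rewards from one rollout segment.
    values: the critic's V(s_t) for each step.
    last_value: the critic's bootstrap estimate V(s_T) for the state
    after the segment ends.
    """
    returns = np.zeros(len(rewards))
    running = last_value
    # Work backwards, bootstrapping the discounted return from last_value.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns - np.asarray(values)
```

In A2C all parallel workers compute this synchronously over the same batch of rollouts before a single gradient update, which is what distinguishes it from the asynchronous updates of A3C.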
We’ve created a bot which beats the world’s top professionals at 1v1 matches of Dota 2 under standard tournament rules. The bot learned the game from scratch by self-play, and does not use imitation learning or tree search. This is a step towards building AI systems which accomplish well-defined goa...
RL-Teacher is an open-source implementation of our interface to train AIs via occasional human feedback rather than hand-crafted reward functions. The underlying technique was developed as a step towards safe AI systems, but also applies to reinforcement learning problems with rewards that are hard...
We’re releasing a new class of reinforcement learning algorithms, Proximal Policy Optimization (PPO), which perform comparably or better than state-of-the-art approaches while being much simpler to implement and tune. PPO has become the default reinforcement learning algorithm at OpenAI because of i...
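Much of PPO's simplicity comes from its clipped surrogate objective: instead of TRPO's constrained optimization, it simply clips the probability ratio between the new and old policies so that a single minibatch update cannot move the policy too far. A minimal sketch of the per-sample objective (our illustration, not the released implementation):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate objective L = min(r*A, clip(r, 1-eps, 1+eps)*A).

    ratio: pi_new(a|s) / pi_old(a|s).
    advantage: advantage estimate for the sampled action.
    Returns the objective to be maximized (negate it for a gradient-descent loss).
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the minimum removes any incentive to push the ratio
    # outside [1-eps, 1+eps] when that would increase the objective.
    return np.minimum(unclipped, clipped)
```

Because the clip only bites when the update would help "too much", the objective is a pessimistic bound, which is what makes PPO stable enough to run with ordinary first-order optimizers.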
We’ve created images that reliably fool neural network classifiers when viewed from varied scales and perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously since they capture images from multiple scales, angles, perspectives, and the like.
One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve develop...
Multiagent environments where agents compete for resources are stepping stones on the path to AGI. Multiagent environments have two useful properties: first, there is a natural curriculum—the difficulty of the environment is determined by the skill of your competitors (and if you’re competing agains...
We’re open-sourcing OpenAI Baselines, our internal effort to reproduce reinforcement learning algorithms with performance on par with published results. We’ll release the algorithms over upcoming months; today’s release includes DQN and three of its variants.
We’ve created a robotics system, trained entirely in simulation and deployed on a physical robot, which can learn a new task after seeing it done once.
We are releasing Roboschool: open-source software for robot simulation, integrated with OpenAI Gym.
We’ve developed an unsupervised system which learns an excellent representation of sentiment, despite being trained only to predict the next character in the text of Amazon reviews.
We’ve created the world’s first Spam-detecting AI trained entirely in simulation and deployed on a physical robot.
In this post we’ll outline new OpenAI research in which agents develop their own language.
The OpenAI team is now 45 people. Together, we’re pushing the frontier of AI capabilities—whether by validating novel ideas, creating new software systems, or deploying machine learning on robots.
Reinforcement learning algorithms can break in surprising, counterintuitive ways. In this post we’ll explore one failure mode, which is where you misspecify your reward function.
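The failure mode is easy to reproduce in miniature: if a proxy reward (say, points for touching respawning checkpoints) pays out more discounted return than the intended reward (finishing the task), an optimal agent will loop on the proxy forever. The toy numbers below are ours, purely for illustration:

```python
def discounted_return(rewards, gamma=0.99):
    """Standard discounted return sum_t gamma^t * r_t."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Intended behavior: reach the goal, a one-time reward of 10 at step 5.
intended = [0.0] * 5 + [10.0]

# Misspecified proxy: +1 per checkpoint touched; a looping policy can
# touch the same respawning checkpoint indefinitely (200 steps shown).
looping_proxy = [1.0] * 200
```

Under a discount of 0.99 the loop's return dwarfs the intended one, so a reward-maximizing agent never finishes the task: the algorithm worked perfectly, and the reward function was the bug.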
We’re working with Microsoft to start running most of our large-scale experiments on Azure.