Learning sparse neural networks through L0 regularization
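As a rough sketch of the idea in our own notation (not necessarily the paper's exact formulation): the L0 norm counts a model's nonzero weights, and the method makes this non-differentiable penalty trainable by attaching stochastic binary gates z to the weights and penalizing the expected number of open gates:

    \|\theta\|_0 = \sum_{j=1}^{|\theta|} \mathbf{1}[\theta_j \neq 0]

    \mathcal{R}(\tilde{\theta}, \pi) = \mathbb{E}_{q(z \mid \pi)}\!\left[ \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}\big(f(x_i;\, \tilde{\theta} \odot z),\, y_i\big) \right] + \lambda \sum_{j=1}^{|\theta|} \pi_j

Here \pi_j is the probability that gate j is on; a continuous relaxation of the gates (the paper uses a hard-concrete distribution) makes the expectation differentiable, so weights can be pruned during training rather than after it.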
We've developed a hierarchical reinforcement learning algorithm that learns high-level actions useful for solving a range of tasks, allowing fast solving of tasks requiring thousands of timesteps. Our algorithm, when applied to a set of navigation problems, discovers a set of high-level actions for...
Our latest robotics techniques allow robot controllers, trained entirely in simulation and deployed on physical robots, to react to unplanned changes in the environment as they solve simple tasks. That is, we've used these techniques to build closed-loop systems rather than open-loop ones as before.
We've found that self-play allows simulated AIs to discover physical skills like tackling, ducking, faking, kicking, catching, and diving for the ball, without explicitly designing an environment with these skills in mind. Self-play ensures that the environment is always the right difficulty for an...
We show that for the task of simulated robot wrestling, a meta-learning agent can learn to quickly defeat a stronger non-meta-learning agent, and also show that the meta-learning agent can adapt to physical malfunction.
We're releasing an algorithm which accounts for the fact that other agents are learning too, and discovers self-interested yet collaborative strategies like tit-for-tat in the iterated prisoner's dilemma. This algorithm, Learning with Opponent-Learning Awareness (LOLA), is a small step towards agent...
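For reference, here is a minimal sketch (ours, not the LOLA code) of the iterated prisoner's dilemma and the tit-for-tat strategy the announcement mentions; the payoff values are the standard textbook ones:

    # Iterated prisoner's dilemma, illustrative sketch only.
    # (my_move, their_move) -> (my_payoff, their_payoff); C = cooperate, D = defect.
    PAYOFFS = {
        ("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1),
    }

    def tit_for_tat(opponent_history):
        # Cooperate on the first move, then mirror the opponent's last move.
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=10):
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(hist_b)  # A conditions on B's past moves
            move_b = strategy_b(hist_a)  # B conditions on A's past moves
            pay_a, pay_b = PAYOFFS[(move_a, move_b)]
            score_a += pay_a
            score_b += pay_b
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
    print(play(tit_for_tat, always_defect))  # exploited only on round one: (9, 14)

Tit-for-tat is collaborative (it cooperates with cooperators) yet self-interested (it retaliates immediately), which is why its emergence is a meaningful test for opponent-aware learning.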
We're releasing two new OpenAI Baselines implementations: ACKTR and A2C. A2C is a synchronous, deterministic variant of Asynchronous Advantage Actor Critic (A3C) which we've found gives equal performance. ACKTR is a more sample-efficient reinforcement learning algorithm than TRPO and A2C, and requir...
Our Dota 2 result shows that self-play can catapult the performance of machine learning systems from far below human level to superhuman, given sufficient compute. In the span of a month, our system went from barely matching a high-ranked player to beating the top pros and has continued to improve s...
We've created a bot which beats the world's top professionals at 1v1 matches of Dota 2 under standard tournament rules. The bot learned the game from scratch by self-play, and does not use imitation learning or tree search. This is a step towards building AI systems which accomplish well-defined goa...
RL-Teacher is an open-source implementation of our interface to train AIs via occasional human feedback rather than hand-crafted reward functions. The underlying technique was developed as a step towards safe AI systems, but also applies to reinforcement learning problems with rewards that are hard...
We've found that adding adaptive noise to the parameters of reinforcement learning algorithms frequently boosts performance. This exploration method is simple to implement and very rarely decreases performance, so it's worth trying on any problem.
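A minimal sketch of the idea (ours, with a toy linear policy, not the released implementation): perturb the policy's parameters once per episode instead of adding noise to each action, and adapt the noise scale so the perturbed policy stays near a target distance from the unperturbed one:

    import numpy as np

    rng = np.random.default_rng(0)

    def perturb(params, sigma):
        # Parameter-space exploration: one noise draw, held fixed for a whole episode.
        return {k: v + sigma * rng.standard_normal(v.shape) for k, v in params.items()}

    def adapt_sigma(sigma, distance, target, alpha=1.01):
        # Grow the noise if the perturbed policy stayed too close to the original,
        # shrink it otherwise (the adaptive part of the method).
        return sigma * alpha if distance < target else sigma / alpha

    # Toy linear policy: action = W @ obs (hypothetical setup for illustration).
    params = {"W": np.zeros((2, 4))}
    sigma, target = 0.1, 0.2
    for episode in range(3):
        noisy = perturb(params, sigma)
        obs = rng.standard_normal(4)
        distance = np.linalg.norm(noisy["W"] @ obs - params["W"] @ obs)  # action-space distance
        sigma = adapt_sigma(sigma, distance, target)

Because the noise is held fixed within an episode, exploration is temporally consistent, unlike per-step action noise.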
We're releasing a new class of reinforcement learning algorithms, Proximal Policy Optimization (PPO), which perform comparably or better than state-of-the-art approaches while being much simpler to implement and tune. PPO has become the default reinforcement learning algorithm at OpenAI because of i...
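The core of PPO is its clipped surrogate objective; here is a minimal numpy sketch (ours, with made-up inputs) showing how clipping limits the size of each policy update:

    import numpy as np

    def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
        # Clipped surrogate objective from the PPO paper (to be maximized).
        # ratio = pi_new(a|s) / pi_old(a|s); clipping removes the incentive to
        # push the ratio outside [1 - eps, 1 + eps].
        ratio = np.exp(logp_new - logp_old)
        unclipped = ratio * advantages
        clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
        return np.mean(np.minimum(unclipped, clipped))

    # Made-up batch: log-probs under the new and old policies, plus advantage estimates.
    logp_new = np.array([-0.9, -1.2, -0.3])
    logp_old = np.array([-1.0, -1.0, -1.0])
    advantages = np.array([1.0, -0.5, 2.0])
    print(ppo_clip_objective(logp_new, logp_old, advantages))

Taking the minimum of the clipped and unclipped terms gives a pessimistic bound on the policy improvement, which is what makes plain first-order optimization safe enough to replace TRPO's constrained update.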
We've created images that reliably fool neural network classifiers when viewed from varied scales and perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously since they capture images from multiple scales, angles, perspectives, and the like.
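One way to read that result (our notation, not necessarily the exact procedure used): instead of optimizing an adversarial perturbation \delta against a single image, optimize it against an expectation over a distribution T of the transformations a camera might apply, such as scalings, rotations, and perspective shifts:

    \delta^{*} = \arg\max_{\|\delta\| \le \epsilon} \; \mathbb{E}_{t \sim T}\left[ \log P\big(y_{\text{target}} \mid t(x + \delta)\big) \right]

A perturbation that fools the classifier in expectation over T keeps fooling it when the viewpoint changes, which is exactly what per-image adversarial examples fail to do.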
We're open-sourcing a high-performance Python library for robotic simulation using the MuJoCo engine, developed over our past year of robotics research.
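A minimal usage sketch, assuming a MuJoCo license and the mujoco-py API as released; the inline model here is our own toy example, not one shipped with the library:

    import mujoco_py

    # A tiny MJCF model defined inline: one free-floating box under gravity.
    MODEL_XML = """
    <mujoco>
      <worldbody>
        <body name="box" pos="0 0 1">
          <joint type="free"/>
          <geom type="box" size="0.1 0.1 0.1"/>
        </body>
      </worldbody>
    </mujoco>
    """

    model = mujoco_py.load_model_from_xml(MODEL_XML)
    sim = mujoco_py.MjSim(model)
    for _ in range(100):
        sim.step()
    print(sim.data.qpos)  # the box's free-joint position/orientation after falling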
One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind's safety team, we've develop...
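The technique fits a reward model to pairwise human comparisons of trajectory segments; a minimal numpy sketch (ours) of the Bradley-Terry-style preference probability and loss from the paper's formulation:

    import numpy as np

    def preference_prob(rewards_1, rewards_2):
        # P(segment 1 preferred over segment 2): softmax over the summed
        # predicted rewards of the two segments.
        s1, s2 = np.sum(rewards_1), np.sum(rewards_2)
        return np.exp(s1) / (np.exp(s1) + np.exp(s2))

    def preference_loss(rewards_1, rewards_2, human_said_1_better):
        # Cross-entropy between the model's preference and the human label;
        # minimizing this trains the reward model.
        p = preference_prob(rewards_1, rewards_2)
        return -np.log(p) if human_said_1_better else -np.log(1 - p)

    # Made-up per-step rewards predicted for two trajectory segments.
    print(preference_loss(np.array([0.5, 0.7]), np.array([0.1, 0.2]), True))

The learned reward model then stands in for a hand-written goal function when training the policy with ordinary reinforcement learning.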
Multiagent environments where agents compete for resources are stepping stones on the path to AGI. Multiagent environments have two useful properties. First, there is a natural curriculum: the difficulty of the environment is determined by the skill of your competitors (and if you're competing agains...
We're open-sourcing OpenAI Baselines, our internal effort to reproduce reinforcement learning algorithms with performance on par with published results. We'll release the algorithms over upcoming months; today's release includes DQN and three of its variants.
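For reference, the learning target DQN regresses its Q-network toward, in a minimal numpy sketch (ours, not the Baselines code):

    import numpy as np

    def dqn_targets(rewards, next_q_values, dones, gamma=0.99):
        # One-step TD targets: y = r + gamma * max_a' Q_target(s', a'),
        # with the bootstrap term zeroed out at terminal states.
        return rewards + gamma * (1.0 - dones) * next_q_values.max(axis=1)

    rewards = np.array([1.0, 0.0])
    next_q = np.array([[0.2, 0.5], [1.0, 0.3]])  # target-network Q-values for s'
    dones = np.array([0.0, 1.0])
    print(dqn_targets(rewards, next_q, dones))   # [1.495, 0.0]

The variants in the release modify pieces of this recipe (for example, how the max over next actions is estimated), while keeping the same TD-target structure.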
We've created a robotics system, trained entirely in simulation and deployed on a physical robot, which can learn a new task after seeing it done once.
We are releasing Roboschool: open-source software for robot simulation, integrated with OpenAI Gym.
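Usage follows the standard Gym pattern; a sketch assuming Roboschool's environments register themselves with Gym on import (the environment id is our assumption; the release lists the full set):

    import gym
    import roboschool  # importing registers the Roboschool environments with Gym

    env = gym.make("RoboschoolAnt-v1")  # assumed id for illustration
    obs = env.reset()
    for _ in range(100):
        # Random actions, just to exercise the simulation loop.
        obs, reward, done, info = env.step(env.action_space.sample())
        if done:
            obs = env.reset()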