OpenAI

Sunday 24th April 2016

Neural Networks: Why AI Development is Going to Get Even Faster

The pace of development of artificial intelligence is going to get faster. And not for the typical reasons: more money, interest from megacompanies, faster computers, cheap & huge data, and so on. Now it’s about to accelerate because other fields are starting to mesh with it, letting insights from one feed into the other, and vice versa.

Neural networks are drawing sustained attention from researchers across the academic spectrum. “Pretty much any researcher who has been to the NIPS Conference [a big AI conference] is beginning to evaluate neural networks for their application,” says Reza Zadeh, a consulting professor at Stanford. That’s going to have a number of weird effects.

People like neural networks because they basically let you chop out a bunch of hand-written code in favor of feeding inputs and outputs into neural nets and getting computers to come up with the stuff in-between. In technical terms, they infer functions.
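To make “inferring functions” concrete, here is a toy sketch (my illustration, not code from any lab mentioned here): instead of hand-writing a rule for a mapping such as sin(x), we hand a small network input/output pairs and let gradient descent find the mapping.

    import numpy as np

    # Toy example: learn y = sin(x) from input/output pairs alone.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(256, 1))   # inputs
    y = np.sin(X)                           # desired outputs

    # One hidden layer of 32 tanh units.
    W1, b1 = rng.normal(0, 0.5, (1, 32)), np.zeros(32)
    W2, b2 = rng.normal(0, 0.5, (32, 1)), np.zeros(1)

    lr = 0.05
    for step in range(20000):
        h = np.tanh(X @ W1 + b1)            # forward pass
        pred = h @ W2 + b2
        err = pred - y                      # gradient of mean-squared error
        dW2 = h.T @ err / len(X); db2 = err.mean(0)
        dh = err @ W2.T * (1 - h ** 2)      # backpropagate through tanh
        dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    # The network now approximates sin without ever being told the formula.
    test = np.array([[1.0]])
    print(float(np.tanh(test @ W1 + b1) @ W2 + b2), np.sin(1.0))

No hand-written sine logic appears anywhere; the stuff in between the inputs and outputs is discovered by the training loop.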

Robotics has only just started to adopt neural networks, and development has already sped up. This year, Google demonstrated a system that lets robotic arms learn to pick up objects of any size and shape. That work built on research conducted last year in Pieter Abbeel’s lab at Berkeley, where scientists combined reinforcement learning with deep neural networks to create robots that could learn new tasks faster.

More distant communities have already adapted the technology to their own needs. Brendan Frey runs a company called Deep Genomics, which uses machine learning to analyze the genome. Part of the motivation for that is that humans are “very bad” at interpreting the genome, he says. Modern machine learning approaches give us a way to get computers to analyze this type of mind-bending data for us. “We must turn to truly superhuman artificial intelligence to overcome our limitations,” he says.

One of the reasons why so many academics from so many different disciplines are getting involved is that deep learning, though complex, is surprisingly adaptable. “Everybody who tries something seems to get things to work beyond what they expected,” says Pieter Abbeel. “Usually it’s the other way around.”

Oriol Vinyals, who came up with some of the technology that sits inside Google Inbox’s ‘Smart Reply’ feature, developed a neural network-based algorithm to plot the shortest routes between various points on a map. “In a rather magical moment, we realized it worked,” he says. This generality not only encourages more experimentation but speeds up the development loop as well.

One challenge: though neural networks generalize very well, we still lack a decent theory to describe them, so much of the field proceeds by intuition. This is both cool and extremely bad. “It’s amazing to me that these very vague, intuitive arguments turned out to correspond to what is actually happening,” says Ilya Sutskever, research director at OpenAI, of the move to create ever-deeper neural network architectures. Work needs to be done here. “Theory often follows experiment in machine learning,” says Yoshua Bengio, one of the founders of the field.

My personal intuition is that deep learning is going to make its way into an ever-expanding number of domains. Given sufficiently large datasets, powerful computers, and the interest of subject-area experts, the deep learning tsunami looks set to wash over an ever-larger number of disciplines. – Jack Clark

Wednesday 14th December 2016

OpenAI Releases Universe, an Open Source Platform for Training AI

  • If Universe works, computers will use video games to learn how to function in the physical world. – Wired
  • Universe is the most intricate and intriguing tech I’ve worked on. – Greg Brockman

OpenAI, the billion-dollar San Francisco artificial intelligence lab backed by Tesla CEO Elon Musk, just unveiled a new virtual world.

This isn’t a digital playground for humans. It’s a school for artificial intelligence. It’s a place where AI can learn to do just about anything.

Other AI labs have built similar worlds where AI agents can learn on their own. Researchers at the University of Alberta offer the Arcade Learning Environment, where agents can learn to play old Atari games like Breakout and Space Invaders.

Microsoft offers Malmo, based on the game Minecraft. And earlier this month, Google’s DeepMind released an environment called DeepMind Lab.

But Universe is bigger than any of these. It’s an AI training ground that spans any software running on any machine, from games to web browsers to protein-folding programs.

“The domain we chose is everything that a human can do with a computer,” says Greg Brockman, OpenAI’s chief technology officer.

In theory, AI researchers can plug any application into Universe, which then provides a common way for AI “agents” to interact with these applications. That means researchers can build bots that learn to navigate one application and then another and then another. – Cade Metz
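In code, that common interface looks like OpenAI Gym’s. The sketch below follows the lines of the starter example OpenAI published with the release; it assumes the universe Python package is installed and a local Docker daemon is running to host the environment.

    import gym
    import universe  # registers the Universe environments with Gym

    # A Flash racing game, served from a Docker container.
    env = gym.make('flashgames.DuskDrive-v0')
    env.configure(remotes=1)  # launch one local remote environment
    observation_n = env.reset()

    while True:
        # Trivial agent: hold the up-arrow key in every environment.
        action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
        observation_n, reward_n, done_n, info = env.step(action_n)
        env.render()

Because every environment speaks the same protocol (screen pixels in, keyboard and mouse events out), pointing the same agent at a browser task or a different game is mostly a matter of changing the environment ID.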
