Google AI

Monday 18th November 2013

Google

Larry Page’s benchmark for success: moon shots and 10x ideas. Google’s founders have no interest in giving dividends or doing stock buybacks. 200 years from now, few will know that Google started as a search engine. They’ll probably think of it as the company that cured cancer, made the internet free for humanity and built self-driving cars.

Conquering the world, learning to fly like birds, reaching the moon, creating the bomb and inventing the United States will all pale in the shadow of just a small percentage completion of Google’s current plans. – Jason Calacanis

1. Free Internet Everywhere for Every Human for Life

2. Data, Machine Learning & Quantum Computing

3. Wearable & Implantable Computing

4. Venture Capital, Funding & Currency

5. Media

6. Life Extension

7. Alternative Energy & Nuclear

8. Transportation, Driverless Cars

 

Monday 20th January 2014

Google

Google is testing a smart contact lens that’s built to measure glucose levels in tears http://googleblog.blogspot.mx/2014/01/introducing-our-smart-contact-lens.html

Jose Galvez

 

Monday 10th February 2014

Robot Stocks

Revenue for the global industrial robotics market is expected to cross $37 billion by 2018. That may sound insignificant next to Bill Gates’ prediction of a robot in every home and a $1 trillion global business by 2025. This is where Google’s acquisition of Nest and other robotics manufacturers may earn it a big slice of the market, together with the Roomba from iRobot and other manufacturers.

The potential market for robots is starting to whet the appetite of investors; consider Adept Technology and iRobot. Stocks in Adept are up 498.42% over the last 5 years, and iRobot stocks are up 386.75% — compare these to the Nasdaq Composite, which is up 154.88%, and the Dow Jones, which is up 88.73%, over the same period. – Colin Lewis
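As a quick sanity check on those figures, the cumulative five-year gains can be converted into compound annual growth rates. A minimal sketch in Python (the percentage gains are the ones quoted above; the annualisation step is the standard CAGR formula):

```python
# Convert a 5-year total return into a compound annual growth rate (CAGR).
# The figures are the five-year gains quoted above (e.g. +498.42% total).

def cagr(total_return_pct: float, years: int = 5) -> float:
    """Annualized growth rate implied by a cumulative percentage gain."""
    return ((1 + total_return_pct / 100) ** (1 / years) - 1) * 100

for name, gain in [("Adept Technology", 498.42), ("iRobot", 386.75),
                   ("Nasdaq Composite", 154.88), ("Dow Jones", 88.73)]:
    print(f"{name}: {cagr(gain):.1f}% per year")
```

Adept’s 498.42% gain, for example, works out to roughly 43% a year compounded, against about 21% for the Nasdaq Composite.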

 

Monday 24th February 2014

Google

Google has gone on an unprecedented shopping spree and is in the throes of assembling what looks like the greatest artificial intelligence laboratory on Earth; a laboratory designed to feast upon a resource of a kind that the world has never seen before: truly massive data. Our data. From the minutiae of our lives.

Google has bought almost every machine-learning and robotics company it can find — or at least, every one it rates.

It made headlines two months ago, when it bought Boston Dynamics, the firm that produces spectacular, terrifyingly life-like military robots, for an “undisclosed” but undoubtedly massive sum. It spent $3.2bn (£1.9bn) on smart thermostat maker Nest Labs. And this month, it bought the secretive and cutting-edge British artificial intelligence startup DeepMind for £242m.

And those are just the big deals. It also bought Bot & Dolly, Meka Robotics, Holomni, Redwood Robotics and Schaft, and another AI startup, DNNresearch. It hired Geoff Hinton, a British computer scientist who’s probably the world’s leading expert on neural networks. And it has embarked upon what one DeepMind investor told the technology publication Re/code two weeks ago was “a Manhattan project of AI”.

If artificial intelligence was really possible, and if anybody could do it, he said, “this will be the team”. The future, in ways we can’t even begin to imagine, will be Google’s. – Carole Cadwalladr

 

Monday 24th February 2014

Ray Kurzweil’s Job at Google

“I thought about if I had all the money in the world, what would I want to do?” he says. “And I would want to do this. This project. This is not a new interest for me. This idea goes back 50 years. I’ve been thinking about artificial intelligence and how the brain works for 50 years.”

Kurzweil’s job description consists of a one-line brief. “I don’t have a 20-page packet of instructions,” he says. “I have a one-sentence spec. Which is to help bring natural language understanding to Google. And how they do that is up to me.”

“My project is ultimately to base search on really understanding what the language means,” he said. “When you write an article, you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world’s information.

“The message in your article is information, and the computers are not picking up on that. So we would want them to read everything on the web and every page of every book, then be able to engage in intelligent dialogue with the user to be able to answer their questions.” – Carole Cadwalladr

 

Wednesday 16th April 2014

Google

If even a small part of what Silicon Valley currently believes about Google (GOOG) is true, it’s significantly undervalued today.

The company everyone believes is the General Electric of the 21st century: 2014 P/E ex-cash of 19.4 – Marc Andreessen, Andreessen Horowitz

 

Friday 25th April 2014

Google Glass Without the Glass.

Google patents smart contact lens with a camera built in. The firm has already developed a smart lens capable of measuring the glucose level of diabetics. – Mark Prigg

 

Friday 25th April 2014

The Biggest Event in Human History

Artificial intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

There are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains. – Stephen Hawking

 

Saturday 3rd May 2014

Google

Larry Page on Google+ on #SelfDriving cars: “We’re going to solve city streets!” – RobotEnomics

 

Monday 19th May 2014

Car Makers

In a recent blog post, Singularity University’s Brad Templeton (a consultant to the Google car team) said, “The past history of high-tech disruptions shows that very few of the incumbent leaders make it through to the other side. If I were one of the car makers who doesn’t even have a serious project on this, I would be very afraid right now.”

Templeton further notes that though Mercedes is furthest along of the major carmakers, its systems are still less capable than the Google car was in 2010. – Jason Dorrier

 

Monday 19th May 2014

5 Areas in Robotics that will Transform Society and Their Economic Impact

1 – Drones: The next 5 years for drones are very promising. Expect to see drones becoming part of society’s information infrastructure as news agencies, TV companies, photographers, real estate agents, moviemakers, industrial giants, pizza deliveries, logistics companies, local governments, agriculture and others embrace drone technology.

2 – Medical Procedures & Operations: IBM’s Watson may become the best diagnostician in the world and be greatly in demand, contributing billions to IBM’s sales whilst potentially saving millions of lives. The global medical robotic systems market was worth $5.48 billion in 2011 and is expected to reach $13.6 billion in 2018, growing at a compound annual growth rate of 12.6% from 2012. Surgical robots are expected to enjoy the largest revenue share.

3 – Robotic Prosthetics & Exoskeletons: The economic market is currently quite small, somewhere around $100 to $150 million; however, with the recent advances in prosthetics and exoskeletons it is expected to grow considerably, to over $1.5 billion in the next 3 to 5 years and higher still thereafter.

4 – Artificial Assistants: This domain has the largest possible early impact on the largest number of people. Artificial intelligence pioneers such as Google Director of Engineering Ray Kurzweil have indicated that anyone with a smartphone or tablet will be using ‘cognitive assistants’ by 2017.

5 – Driverless Cars: Autonomous vehicles, including the iconic Google self-driving cars, will be on the road commercially before 2018. The long-term impact on society of self-driving cars and other autonomous vehicles will be a radical change in how we commute. There will also likely be a sharp reduction in traffic accidents, the majority of which are caused by human error.

Colin Lewis

 

Monday 2nd June 2014

Microsoft

Skype video phone call at codecon with real time voice translation German to English. Awesome. Future. Well done.

Skype voice language translation product will be launched this year – Mark Suster, Upfront Ventures

 

I can’t help but be reminded of Google no longer understanding how its systems are learning to identify objects in photos so accurately – the technology is hugely impressive and, in developing a mind of its own, kind of disturbing – David Meyer

 

Wednesday 20th August 2014

The Quest to Build an Artificial Brain

Deep learning has suddenly spread across the commercial tech world, from Google to Microsoft to Baidu to Twitter, just a few years after most AI researchers openly scoffed at it.

All of these tech companies are now exploring a particular type of deep learning called convolutional neural networks, aiming to build web services that can do things like automatically understand natural language and recognize images. At Google, “convnets” power the voice recognition system available on Android phones. At China’s Baidu, they drive a new visual search engine.

But this is just a start. The deep learning community is working to improve the technology. Today’s most widely used convolutional neural nets rely almost exclusively on supervised learning: if you want one to learn how to identify a particular object, you have to feed it labeled examples of that object. Yet unsupervised learning—or learning from unlabeled data—is closer to how real brains learn, and some deep learning research is exploring this area.
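The supervised-learning bottleneck described above is easy to see in code. Below is a minimal sketch in PyTorch (a toy model on random data, not any company’s production convnet): the training step simply cannot run without human-supplied labels.

```python
# A minimal supervised convnet sketch: every training image must come
# with a label, which is exactly the bottleneck unsupervised learning
# research is trying to remove.
import torch
import torch.nn as nn

model = nn.Sequential(                       # tiny convolutional network
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                         # 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),              # 10 object classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 1, 28, 28)          # stand-in batch of images
labels = torch.randint(0, 10, (32,))         # the labels a human supplied

loss = loss_fn(model(images), labels)        # supervised: needs `labels`
optimizer.zero_grad()
loss.backward()
optimizer.step()
```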

“How this is done in the brain is pretty much completely unknown. Synapses adjust themselves, but we don’t have a clear picture for what the algorithm of the cortex is,” says LeCun. “We know the ultimate answer is unsupervised learning, but we don’t have the answer yet.” – Daniela Hernandez

 

Tuesday 25th November 2014

Google

Around 2002 I attended a small party for Google—before its IPO, when it only focused on search. I struck up a conversation with Larry Page, Google’s brilliant cofounder, who became the company’s CEO in 2011. “Larry, I still don’t get it. There are so many search companies. Web search, for free? Where does that get you?”

My unimaginative blindness is solid evidence that predicting is hard, especially about the future, but in my defense this was before Google had ramped up its ad-auction scheme to generate real income, long before YouTube or any other major acquisitions. I was not the only avid user of its search site who thought it would not last long. But Page’s reply has always stuck with me: “Oh, we’re really making an AI.”

I’ve thought a lot about that conversation over the past few years as Google has bought 14 AI and robotics companies. At first glance, you might think that Google is beefing up its AI portfolio to improve its search capabilities, since search contributes 80 percent of its revenue. But I think that’s backward. Rather than use AI to make its search better, Google is using search to make its AI better. Every time you type a query, click on a search-generated link, or create a link on the web, you are training the Google AI.

When you type “Easter Bunny” into the image search bar and then click on the most Easter Bunny-looking image, you are teaching the AI what an Easter bunny looks like. Each of the 12.1 billion queries that Google’s 1.2 billion searchers conduct each day tutors the deep-learning AI over and over again. With another 10 years of steady improvements to its AI algorithms, plus a thousand-fold more data and 100 times more computing resources, Google will have an unrivaled AI. My prediction: By 2024, Google’s main product will not be search but AI. – Kevin Kelly
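Kelly’s point, that every click is a small training signal, is essentially implicit-feedback learning. A toy illustration (a hypothetical scoring scheme, not Google’s actual system):

```python
# Toy implicit-feedback learner: treat each click on a search result as
# an implicit "this image matches the query" label and nudge a scoring
# table toward it.
from collections import defaultdict

# score[query][image] starts neutral and is reinforced by clicks
scores = defaultdict(lambda: defaultdict(float))

def record_click(query: str, clicked_image: str, shown: list):
    """Clicked result gets a positive update; skipped results a small negative one."""
    for image in shown:
        scores[query][image] += 0.1 if image == clicked_image else -0.01

record_click("easter bunny", "bunny_photo.jpg",
             ["bunny_photo.jpg", "egg.jpg", "rabbit_cartoon.png"])
print(dict(scores["easter bunny"]))
```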

 

Tuesday 25th November 2014

Tech’s Pace is Like a Dozen Gutenberg Moments Happening at the Same Time

Drilling down into the concepts and consequences of our exponential pace, Singularity University’s global ambassador and founding executive director, Salim Ismail, set the stage.

We’re at an inflection point, he said, where we are digitizing and augmenting the human experience with technology. That digitization is accelerating change. The question is: How can individuals and society, more generally, navigate it?

Five hundred years ago, Johannes Gutenberg’s printing press freed information as never before. Ismail framed the current pace of technology as Gutenberg to the extreme: “We’re having about a dozen Gutenberg moments all at the same time.”

Ismail showed a video of someone riding in one of Google’s self-driving cars as it navigated an obstacle course at top speed. The rider is amazed and a little nervous—the video ends with him letting out a little involuntary scream. Today, the world is letting out a little collective Google scream. – Jason Dorrier

 

Monday 29th December 2014

Google

Google wants its self-driving car ready in five years – Joseph B. White & Rolfe Winkler

 

Monday 29th December 2014

“My take is that A.I. is taking over. A few humans might still be ‘in charge,’ but less and less so”

– Sebastian Thrun, Lead Developer of Google’s Driverless Car Project

 

Monday 29th December 2014

Skype’s Real Life Babel Fish Translates English/Spanish in Real Time

Microsoft has released its first preview of Skype Translator, which allows real-time conversations between spoken English and Spanish and will be extended to more languages.

It is now available as a free download for Windows 8.1, starting with spoken English and Spanish along with more than 40 text-based languages for instant messaging.

Gurdeep Pall, Microsoft’s corporate vice-president of Skype and Lync, said in a blog post that Skype Translator would “open up endless possibilities”, adding: “Skype Translator relies on machine learning, which means that the more the technology is used, the smarter it gets. We are starting with English and Spanish, and as more people use the Skype Translator preview with these languages, the quality will continually improve.”

Skype Translator is part of Microsoft’s artificial intelligence research relying on machine learning and deep neural networks, much like Google and Apple’s voice assistants. It can understand speech and then rapidly translate it into another language before using text-to-speech systems to speak the translation back to the user, or in this case the other party.

The more people use the preview, the more data the Skype team will have to improve the translation. – Samuel Gibbs
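The article describes a three-stage pipeline: speech recognition, then machine translation, then text-to-speech. A schematic sketch with stubbed stages (the function names and canned outputs are placeholders, not Microsoft’s APIs):

```python
# Schematic of the Skype Translator-style pipeline described above.
# Each stage is a stub standing in for a trained model.

def recognize_speech(audio: bytes) -> str:
    """Stage 1: transcribe spoken English (stubbed)."""
    return "hello, how are you?"

def translate(text: str, target: str = "es") -> str:
    """Stage 2: machine-translate the transcript (stubbed)."""
    return "hola, ¿cómo estás?"

def synthesize(text: str) -> bytes:
    """Stage 3: speak the translation back (stubbed)."""
    return text.encode("utf-8")

def skype_style_translator(audio: bytes) -> bytes:
    return synthesize(translate(recognize_speech(audio)))

print(skype_style_translator(b"...raw audio..."))
```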

 

Sunday 18th January 2015

Artificial Intelligence

In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist. – Robert McMillan

 

Friday 30th January 2015

Artificial Intelligence

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly.

While there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI’s caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.

* Cars are full of ANI systems

* Your phone is a little ANI factory.

* Google Translate is another classic ANI

Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world’s ANI systems “are like the amino acids in the early Earth’s primordial ooze”—the inanimate stuff of life that, one unexpected day, woke up.

————————————————————-

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating an AGI is a much harder task than creating an ANI, and we’ve yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” An AGI would be able to do all of those things as easily as you can.

————————————————————–

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board.

As of now, humans have conquered the lowest caliber of AI—ANI—in many ways, and it’s everywhere. The AI Revolution is the road from ANI, through AGI, to ASI—a road we may or may not survive but that, either way, will change everything – Tim Urban

 

Wednesday 11th February 2015

Artificial Intelligence

Behind much of the proliferation of AI startups are large companies such as Google, Microsoft Corp., and Amazon, which have quietly built up AI capabilities over the past decade to handle enormous sets of data and make predictions, like which ad someone is more likely to click on. Starting in the mid-2000s, the companies resurrected AI techniques developed in the 1980s, paired them with powerful computers and started making money.

Their efforts have resulted in products like Apple’s chirpy assistant Siri and Google’s self-driving cars. It has also spurred deal-making, with Facebook acquiring voice-recognition AI startup Wit.ai last month and Google buying DeepMind Technologies Ltd. in January 2014.

For Google, “the biggest thing will be artificial intelligence,” Chairman Eric Schmidt said last year in an interview with Bloomberg Television’s Emily Chang.

The AI boom has also been stoked by universities, which have noticed the commercial success of AI at places like Google and taken advantage of falling hardware costs to do more research and collaborate with closely held companies.

Last November, the University of California at San Francisco began working with Palo Alto, California-based MetaMind on two projects: one to spot prostate cancer and the other to predict what may happen to a patient after reaching a hospital’s intensive care unit so that staff can more quickly tailor their approach to the person – Jack Clark

 

Monday 6th April 2015

Tesla Cars Will Start Self Driving This Summer

All Teslas will get an over-the-air update this summer, probably around June, allowing them to drive in “Autopilot” mode.

Musk did confirm that the Autopilot mode would be “technically capable of driving from parking lot to parking lot.” The car will also be allowed to drive itself when you summon it, and when you’re parking it in your garage.

It seems Autopilot will be disabled when you’re not doing freeway driving, which is by far the easiest aspect of autonomous vehicle activity.

Just to be clear, we’re not talking about some far-off future Tesla. We’re not talking about Google driverless car prototypes or government road tests. This is a car you can buy today, which will be given the ability to drive itself in a few months via the same setup that updates your iPhone. Automated automobiles, automatically activated – Chris Taylor

 

Monday 6th April 2015

Google’s Driverless Car Project

The head of its secretive Google X labs, Astro Teller, casually dropped in to his South by Southwest talk an intriguing nugget: the company could have offered freeway-driving driverless cars as much as two years ago, but preferred to build a vehicle from the ground up that could handle all driving — and didn’t have so much as a steering wheel.

“We could have taken a much easier path than the one we’ve chosen,” Teller told a packed crowd in an Austin ballroom.

“Two years ago we had a perfectly good freeway commute helper. Freeway driving was easy for our cars at that point. You stay in your lane, change lanes occasionally, and don’t hit the guy in front of you — there’s the occasional poor driver who makes things a little interesting, but the car had basically mastered freeways.”

And how about surface streets? We know the biggest problem is trying to figure out those unpredictable pedestrians who might step out into traffic at any moment, or do one of a hundred crazy things; we’ve long assumed that’s the sort of situation that only a human brain can handle. But according to Teller, Google X is mastering the most complex pedestrian situations, too – Chris Taylor

 

Monday 6th April 2015

The “Babel Fish” Universal Translator

Wired (Dec 2014) wrote: “Microsoft is already using some of the text translation technology underpinning Skype Translate to power its Bing Translate search engine translation service, and to jump start the foreign language translation of its products, manuals, and hundreds of thousands of support documents.” http://www.wired.com/2014/12/skype-used-ai-build-amazing-new-language-translator/

Live Science wrote (21 Mar 2015): “Ongoing research could eventually power machine translators that rival the fluidity of sci-fi translators, Google researcher Geoffrey Hinton suggested in a Reddit AMA — he likened the possibilities to those of the “Babel Fish” universal translator in Douglas Adams’ “Hitchhiker’s Guide to the Galaxy.” (In the book, the Babel Fish is a small leechlike fish inserted into the ear that provides instant, universal translation.)” http://www.livescience.com/50216-star-wars-artificial-intelligence-universal-translator.html

GigaOm (29 Jan 2015) quoted Geoffrey Hinton: “In a few years’ time we will put it on a chip that fits into someone’s ear and have an English-decoding chip that’s just like a real Babel fish.” https://gigaom.com/2015/01/29/how-ai-can-help-build-a-universal-real-time-translator/

Singularity 2045

 

Wednesday 6th May 2015

‘Supercharged’ Genomics: 100 Years of Breakthroughs Possible in 10 Years

A “supercharged” approach to human genome research could see as many health breakthroughs made in the next decade as in the previous century, says Brad Perkins, chief medical officer at Human Longevity Inc.

“I don’t have a pill” to boost human lifespan, Perkins admitted on stage at WIRED Health 2015. But he has perhaps the next best thing — data, and the means to make sense of it. Based in San Diego, Human Longevity is fixed on using genome data and analytics to develop new ways to fight age-related diseases.

Perkins says the opportunity for humanity — and Human Longevity — is the result of the convergence of four trends:

1) The reduction in the cost of genome sequencing (from $100m per genome in 2000, to just over $1,000 in 2014)

2) The vast improvement in computational power

3) The development of large-scale machine learning techniques

4) The wider movement of health care systems towards ‘value-based’ models.

Together these trends are making it easier than ever to analyse human genomes at scale.
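The first trend listed above implies a striking rate of decline. A quick back-of-the-envelope calculation from the quoted endpoints ($100m per genome in 2000, just over $1,000 in 2014):

```python
# Implied annual rate of sequencing-cost decline from the quoted figures.
import math

factor = 100_000_000 / 1_000          # 100,000x cheaper overall
years = 2014 - 2000
annual = factor ** (1 / years)        # cost shrinks by this factor per year
halving_months = 12 * math.log(2) / math.log(annual)

print(f"~{annual:.2f}x cheaper per year; cost halves every ~{halving_months:.0f} months")
```

That is roughly a 2.3x cost reduction per year, with the cost halving about every 10 months, considerably faster than Moore’s Law.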

“Our focus is not being a fee for service sequencing operation,” Perkins says. It is to “fully understand and fully interpret all the meaning in the human genome”. To do that Human Longevity Inc is building machine learning systems which can act as a ‘Google Translate’ for genomics, taking in genetic code and spitting out insights.

The results, he believes, will be revolutionary — and make genuine differences in people’s lives — including his own. “My daughter is graduating from university next month, my father if he were alive would be 78 years of age… I’m encouraged that we’re on the verge of having lots more grandfathers and grandmothers at the special events of all of our lives,” Perkins says. “As genomics begins the process of revolutionising human health and the practice of medicine, and opens the door to the next steps… of regenerative medicine. It’s going to be an extraordinarily exciting ride.” – Michael Rundle

 

Monday 18th May 2015

Google’s Self-Driving Cars

With humans now at the wheel, more than 30,000 people die annually in auto collisions in the U.S. That’s a staggering number. People will accept an unacceptable status quo and be concerned about the things that are new. – Bryant Walker Smith

———————————————————-

It’s hard to know what’s really going on out on the streets unless you’re doing miles and miles of driving every day.

And that’s exactly what we’ve been doing with our fleet of 20+ self-driving vehicles and team of safety drivers, who’ve driven 1.7 million miles (manually and autonomously combined).

The cars have self-driven nearly a million of those miles, and we’re now averaging around 10,000 self-driven miles a week (a bit less than a typical American driver logs in a year), mostly on city streets.

Over the 6 years since we started the project, we’ve been involved in 11 minor accidents (light damage, no injuries) during those 1.7 million miles of autonomous and manual driving with our safety drivers behind the wheel, and not once was the self-driving car the cause of the accident – Chris Urmson, Director of Google’s Self-Driving Car Program

 

Monday 18th May 2015

Demis Hassabis: DeepMind is the Apollo Program for AI

Artificial intelligence (AI) pioneer Demis Hassabis has revealed his ambitions for DeepMind and explained why the AI startup “joined forces” with Google in 2014.

Speaking at a recent Google ZeitgeistMinds event, Hassabis said that DeepMind had set itself up as the Apollo Programme for AI – referencing NASA’s efforts to put a man on the moon – with its primary mission being to “solve intelligence and use it to solve everything else” through general-purpose learning machines.

“The idea behind DeepMind was really to create a kind of Apollo Programme mission for AI,” Hassabis said. “Now at DeepMind we have over 100 research scientists – 100 PhDs – top people in their machine learning fields and neuroscience fields working on solving AI.”

Earlier this year DeepMind published details of the first ever AI system capable of learning tasks independently – Anthony Cuthbertson

 

Monday 18th May 2015

Baidu’s AI Supercomputer Beats Google at Image Recognition

Chinese search giant Baidu says it has invented a powerful supercomputer that brings new muscle to an artificial-intelligence technique giving software more power to understand speech, images, and written language.

The new computer, called Minwa and located in Beijing, has 72 powerful processors and 144 graphics processors, known as GPUs. Late Monday, Baidu released a paper claiming that the computer had been used to train machine-learning software that set a new record for recognizing images, beating a previous mark set by Google.

“Our company is now leading the race in computer intelligence,” said Ren Wu, a Baidu scientist working on the project, speaking at the Embedded Vision Summit on Tuesday.

Minwa’s computational power would probably put it among the 300 most powerful computers in the world if it weren’t specialized for deep learning, said Wu. “I think this is the fastest supercomputer dedicated to deep learning,” he said. “We have great power in our hands—much greater than our competitors.” – Tom Simonite

 

Monday 18th May 2015

Disruption of Healthcare

By 2025, existing healthcare institutions will be crushed as new business models with better and more efficient care emerge.

Thousands of startups, as well as today’s data giants (Google, Apple, Microsoft, SAP, IBM, etc.) will all enter this lucrative $3.8 trillion healthcare industry with new business models that dematerialize, demonetize and democratize today’s bureaucratic and inefficient system.

Biometric sensing (wearables) and AI will make each of us the CEOs of our own health. Large-scale genomic sequencing and machine learning will allow us to understand the root cause of cancer, heart disease and neurodegenerative disease and what to do about it. Robotic surgeons can carry out an autonomous surgical procedure perfectly (every time) for pennies on the dollar. Each of us will be able to regrow a heart, liver, lung or kidney when we need it, instead of waiting for the donor to die – Peter Diamandis

 

Monday 25th May 2015

Self-Driving Trucks

On May 6, 2015, the first self-driving truck hit the American road in the state of Nevada. Self-driving trucks are no longer the future. They are the present. They’re here.

Basically, the only real barrier to the immediate adoption of self-driven trucks is purely legal in nature, not technical or economic.

With self-driving vehicles currently only road legal in a few states, many more states need to follow suit unless autonomous vehicles are made legal at the national level. And Sergey Brin of Google has estimated this could happen as soon as 2017. – Scott Santens

 

Tuesday 9th June 2015

Ray Kurzweil to Release New Book in 18 Months Titled “The Singularity is Nearer”

Appearing via a telepresence robot, Ray Kurzweil took the stage at the Exponential Finance conference to address questions posed by CNBC’s Bob Pisani.

During the discussion on the future of computing and whether Moore’s law was truly in jeopardy, Kurzweil took the opportunity to announce a sequel to The Singularity Is Near aptly titled The Singularity Is Nearer, planned for release in 18 months which will include updated charts.

It’s likely that the text will also aim to showcase his prediction track record, akin to a report [PDF] he released in 2010 titled “How My Predictions Are Faring.”

He also explained that his team is utilizing numerous AI techniques to deal with language and learning, and in the process, collaborating with Google’s DeepMind, the recently acquired startup that developed a neural network that learned how to play video games successfully.

“Mastering intelligence is so difficult that we need to throw everything we have at it. I think we are very much on schedule to achieve human levels of intelligence by 2029.”

David J. Hill

 

Sunday 28th June 2015

How Computers Will Crack the Genetic Code and Improve Billions of Lives

Machine learning and data science will do more to improve healthcare than all the biological sciences combined.

Human Longevity Inc. (HLI) is working on the most epic challenge — extending the healthy human lifespan.

Your genome consists of approximately 3.2 billion base pairs (your DNA) that literally code for “you.”

Your genes code for what diseases you might get, whether you are good at math or music, how good your memory is, what you look like, what you sound like, how you feel, how long you’ll likely live, and more.

This means that if we can decipher this genomic “code,” we can predict your biological future and proactively work to anticipate and improve your health.

It’s a data problem — and if you are a data scientist or machine-learning expert, it is the most challenging, interesting and important problem you could ever try to tackle.

When we compare your sequenced genome with millions of other people’s genomes AND other health data sets (see below), we can use machine learning and data mining techniques to correlate certain traits (eye color, what your face looks like) or diseases (Alzheimer’s, Huntington’s) to factors in the data and begin to develop diagnostics/therapies around them.
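The correlation step described here, matching genome variants against traits across many people, can be sketched with off-the-shelf machine learning tools. A toy example on synthetic data (illustrative only, not HLI’s pipeline):

```python
# Toy genotype-to-trait association: learn which genome positions
# predict a binary trait, using made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_people, n_variants = 1_000, 200
genomes = rng.integers(0, 3, size=(n_people, n_variants))  # 0/1/2 copies of each variant

# Pretend variant 42 drives the trait, plus noise
trait = (genomes[:, 42] + rng.normal(0, 0.5, n_people) > 1.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(genomes, trait)
top = np.argsort(np.abs(model.coef_[0]))[::-1][:3]
print("variants most associated with the trait:", top)
```

At real scale the same idea runs over millions of genomes and far richer phenotype data, which is why it becomes a machine-learning problem rather than a bench-science one.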

[Image: HLI – Biological Data]

It’s a Translation Problem, Like Google Translate

With millions and millions of documents/websites/publications online that were already translated, and a crowd of 500 million users to correct and “teach” the algorithm, Google Translate can quickly and accurately translate between 90 different languages.

Our challenge now is applying similar techniques to all of this genomic data and these integrated health records… and we found the perfect person to lead this effort: Franz Och — the man responsible for building Google Translate.

Franz is a renowned expert in machine learning and machine translation. He spent 10 years at Google as a distinguished research scientist and the chief architect of Google Translate, literally building the system from the ground up.

Now, Franz is Human Longevity Inc.’s chief data scientist, responsible for developing new computational methods to translate between all of the human biological information.

When you ask Franz why he’s so excited about HLI, his answer is twofold: the mission and the challenge.

Franz explains, “The big thing is the mission — the ability to affect humanity in a positive way. If you are a data scientist, why focus on making a better messaging app or better Internet advertising, when you could be advancing the understanding of disease to make sick people better and of aging to make people live longer, healthier lives?”

As far as the challenge, he goes on: “The big mission is to learn how to interpret the human genome — to be able to predict anything that can be predicted from the source code that runs us.” – Peter Diamandis

 

Wednesday 9th September 2015

Is a Cambrian Explosion Coming for Robotics?

Many of the base hardware technologies on which robots depend—particularly computing, data storage, and communications—have been improving at exponential growth rates. Two newly blossoming technologies—“Cloud Robotics” and “Deep Learning”—could leverage these base technologies in a virtuous cycle of explosive growth.

In Cloud Robotics—a term coined by James Kuffner (2010)—every robot learns from the experiences of all robots, which leads to rapid growth of robot competence, particularly as the number of robots grows.

Deep Learning algorithms are a method for robots to learn and generalize their associations based on very large (and often cloud-based) “training sets” that typically include millions of examples. Interestingly, Li (2014) noted that one of the robotic capabilities recently enabled by these combined technologies is vision—the same capability that may have played a leading role in the Cambrian Explosion.

How soon might a Cambrian Explosion of robotics occur? It is hard to tell.

The very fast improvement of Deep Learning has been surprising, even to experts in the field. The recent availability of large amounts of training data and computing resources on the cloud has made this possible; the algorithms being used have existed for some time and the learning process has actually become simpler as performance has improved.

The timing of tipping points is hard to predict, and exactly when an explosion in robotics capabilities will occur is not clear. Commercial investment in autonomy and robotics—including and especially in autonomous cars—has significantly accelerated, with high-profile firms like Amazon, Apple, Google, and Uber involved.

Human beings communicate externally with one another relatively slowly, at rates on the order of 10 bits per second. Robots, and computers in general, can communicate at rates over one gigabit per second—or roughly 100 million times faster. Based on this tremendous difference in external communication speeds, a combination of wireless and Internet communication can be exploited to share what is learned by every robot with all robots.

Human beings take decades to learn enough to add meaningfully to the compendium of common knowledge. However, robots not only stand on the shoulders of each other’s learning, but can start adding to the compendium of robot knowledge almost immediately after their creation.
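Pratt’s fleet-learning argument is straightforward to sketch: an experience uploaded by one robot is available to every robot at network speed (and 1 Gbit/s divided by roughly 10 bits/s is the 100-million-fold gap quoted above). A schematic toy in Python, not any vendor’s API:

```python
# Cloud Robotics sketch: robots log experiences to a shared pool, and
# every robot syncs from it, so one robot's lesson reaches the fleet.

cloud_experience_pool = []          # stands in for shared cloud storage

class Robot:
    def __init__(self, name):
        self.name = name
        self.knowledge = set()

    def learn_locally(self, lesson: str):
        cloud_experience_pool.append(lesson)   # upload at network speed

    def sync_from_cloud(self):
        self.knowledge.update(cloud_experience_pool)

fleet = [Robot(f"robot-{i}") for i in range(3)]
fleet[0].learn_locally("door handles need about 5 N of grip force")
for robot in fleet:
    robot.sync_from_cloud()
print(fleet[2].knowledge)           # robot-2 knows what robot-0 learned
```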

The online repository of visually recorded objects and human activity is a tremendous resource that robots may soon exploit to improve their ability to understand and interact with the world, including interactions with human beings. Social media sites uploaded more than 1 trillion photos in 2013 and 2014 combined, and given the growth rate may upload another trillion in 2015.

The key problems in robot capability yet to be solved are those of generalizable knowledge representation and of cognition based on that representation. How can computer memories represent knowledge to be retrieved by memory-based methods so that similar but not identical situations will call up the appropriate memories and thoughts?

Significant cues are coming from the expanding understanding of the human brain, with the rate of understanding accelerating because of new brain imaging tools. Some machine learning algorithms, like the Deep Learning approaches discussed earlier, are being applied in an attempt to discover generalizable representations automatically.

It is not clear how soon this problem will be solved. It may only be a few years until robots take off—or considerably longer. Robots are already making large strides in their abilities, but as the generalizable knowledge representation problem is addressed, the growth of robot capabilities will begin in earnest, and it will likely be explosive. The effects on economic output and human workers are certain to be profound. – Gill A. Pratt

 

Saturday 12th December 2015

D-Wave: The Quantum Computing Era Has Begun

Geordie Rose is a founder and CTO of D-Wave. He is known as a leading advocate for quantum computing and physics-based processor design.

Founded in 1999, D-Wave Systems is the world’s first quantum computing company. Our mission is to integrate new discoveries in physics, engineering, manufacturing, and computer science into breakthrough approaches to computation that help solve some of the world’s most complex challenges.

Despite the incredible power of today’s supercomputers, there are many complex computing problems that can’t be addressed by conventional systems. Our need to better understand everything, from the universe to our own DNA, leads us to seek new approaches to answer the most difficult questions.

While we are only at the beginning of this journey, quantum computing has the potential to help solve some of the most complex technical, commercial, scientific, and national defense problems that organizations face. We expect that quantum computing will lead to breakthroughs in science, engineering, modeling and simulation, financial analysis, optimization, logistics, and national defense applications.

Today D-Wave is the recognized leader in the development, fabrication, and integration of superconducting quantum computers. Our systems are being used by world-class organizations and institutions including Lockheed-Martin, Google, NASA, and USC. D-Wave has been granted over 110 US patents and has published over 80 peer-reviewed papers in leading scientific journals.

In 2010 we released our first commercial system, the D-Wave One™ quantum computer. We have doubled the number of qubits each year, and in 2013 we shipped our 512-qubit D-Wave Two™ system. In 2015 we announced general availability of the 1000+ qubit D-Wave 2X™ system. – D-Wave

Saturday 12th December 2015

Rose’s Law for Quantum Computers Keeps Marching On

[Update in 2015: the hardware curve that is “Rose’s Law” (blue diamonds) remains on track. The software and performance/qubit (red stars, as applied to certain tasks) is catching up, and may lag by a couple years from the original prediction overlaid onto the graph] – Steve Jurvetson

When I first met Geordie Rose in 2002, I was struck by his ability to explain complex quantum physics and the “spooky” underpinnings of quantum computers. I had just read David Deutsch’s Fabric of Reality [1997] where he predicts the possibility of such computers, and so I invited Rose to one of our tech conferences.

We first invested [in D-Wave] in 2003, and Geordie predicted that he would be able to demonstrate a two-bit quantum computer within 6 months.

There was a certain precision to his predictions. With one bit under his belt, and a second coming, he went on to suggest that the number of qubits in a scalable quantum computing architecture should double every year. It sounded a lot like Gordon Moore’s prediction back in 1965, when he extrapolated from just five data points on a log-scale.

So I called it “Rose’s Law” and that seemed to amuse him. Well, the decade that followed has been quite amazing.

So, how do we read the graph above?

Like Moore’s Law, a straight line describes an exponential. But unlike Moore’s Law, the computational power of the quantum computer should grow exponentially with the number of entangled qubits as well. It’s like Moore’s Law compounded. (D-Wave just put together an animated visual of each processor generation in this video, bringing us to the present day.)

And now, it gets mind bending. If we suspend disbelief for a moment, and use D-Wave’s early data on processing power scaling (more on that below), then the very near future should be the watershed moment, where quantum computers surpass conventional computers and never look back. Moore’s Law cannot catch up.

A year later, it outperforms all computers on Earth combined.

Double qubits again the following year, and it outperforms the universe. What the???? you may ask… Meaning, it could solve certain problems that could not be solved by any non-quantum computer, even if the entire mass and energy of the universe was at its disposal and molded into the best possible computer. It is a completely different way to compute — as David Deutsch posits — harnessing the refractive echoes of many trillions of parallel universes to perform a computation.
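The “Moore’s Law compounded” claim combines two exponentials: the qubit count N doubles each year under Rose’s Law, while the state space a machine with N entangled qubits can explore grows like 2^N. A sketch with illustrative numbers (128 qubits is the D-Wave One-era count; treating 2^N as “power” is the post’s own simplification, not a rigorous complexity claim):

```python
# Rose's Law sketch: qubit count doubles yearly; state space grows as 2**N.
qubits = 128                          # D-Wave One (2010) had 128 qubits
for year in range(2010, 2016):
    print(f"{year}: {qubits:>5} qubits, state space 2^{qubits}")
    qubits *= 2                       # Rose's Law: double every year
```

By this reckoning 2^512 already dwarfs the roughly 10^80 (about 2^266) atoms in the observable universe, which is the sense in which the curve “outperforms the universe.”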

First, the caveat (the text in white letters on the graph): D-Wave has not built a general-purpose quantum computer. Think of it as an application-specific processor, tuned to perform one task — solving discrete optimization problems.

This happens to map to many real world applications, from finance to molecular modeling to machine learning, but it is not going to change our current personal computing tasks. In the near term, assume it will apply to scientific supercomputing tasks and commercial optimization tasks where a heuristic may suffice today, and perhaps it will be lurking in the shadows of an Internet giant’s data center improving image recognition and other forms of near-AI magic. In most cases, the quantum computer would be an accelerating coprocessor to a classical compute cluster.

There is also the question of the programming model. Until recently, programming a quantum computer was more difficult than machine coding an Intel processor. Imagine having to worry about everything from analog gate voltages to algorithmic transforms of programming logic to something native to quantum computing (Shor and Grover and some bright minds have made the occasional mathematical breakthrough on that front).

With the application-specific quantum processor, D-Wave has made it all much easier, and with their forthcoming Black Box overlay, programming moves to a higher level of abstraction, like training a neural network with little understanding of the inner workings required.

In any case, the possibility of a curve like this begs many philosophical and cosmological questions about our compounding capacity to compute… the beginning of infinity if you will.

While it will be fascinating to see if the next three years play out like Rose’s prediction, for today, perhaps all we should say is that it’s not impossible. And what an interesting world this may be. – Steve Jurvetson, October 2012

 

Saturday 12th December 2015

The Watershed Moment: Quantum Computer Announcement from Google

Boom! Google just announced their watershed results in quantum computing using their D-Wave Two.

It is rare to see a 100,000,000x leap in computing power… at least in this universe! =)

From the D-Wave board meeting today, I learned that it cost Google $1m to run the massive computation on their classical computers. The SA and QMC (simulated annealing and quantum Monte Carlo, both run on classical computers) data points cost $1m of energy, and the green curve totally choked on large problem sets (that’s why there are no green data points in the top right). The D-Wave computer’s operating cost was well over 100x less.

Has there ever been a leap forward like this in human history? (In anything, like computing, energy processing, transportation… I am guessing there have been purely algorithmic advances of this magnitude, but I am having trouble thinking of a single advance of this scale.) – Steve Jurvetson, December 2015

 

Tuesday 29th December 2015

The First Person to Hack the iPhone Built a Self-Driving Car… in His Garage.

This is a fantastic story on many levels.

From the self-motivated hacking to the visionary tech to the unwillingness to conform to corp interests to the “Bitcoin preferred here”

Lesson from tech history seems to be that no matter how astonishing a company may seem just wait til you see the guys working out of their garage. – Michael Goldstein

George Hotz is taking on Google and Tesla by himself.

George Hotz, the first person to hack the iPhone, says he built a self-driving car in a month. How did he do it? Bloomberg’s Ashlee Vance went to Hotz’s home to find out…

Bloomberg Businessweek

 

Tuesday 29th December 2015

Building The Quantum Dream Machine

John Martinis has been researching how quantum computers could work for 30 years. Now he could be on the verge of finally making a useful one.

With his new Google lab up and running, Martinis guesses that he can demonstrate a small but useful quantum computer in two or three years. “We often say to each other that we’re in the process of giving birth to the quantum computer industry,” he says.

The new computer would let a Google coder run calculations in a coffee break that would take a supercomputer of today millions of years.

The software that Google has developed on ordinary computers to drive cars or answer questions could become vastly more intelligent. And earlier-stage ideas bubbling up at Google and its parent company, such as robots that can serve as emergency responders or software that can converse at a human level, might become real.

As recently as last week the prospect of a quantum computer doing anything useful within a few years seemed remote. Researchers in government, academic, and corporate labs were far from combining enough qubits to make even a simple proof-of-principle machine.

A well-funded Canadian startup called D-Wave Systems sold a few of what it called “the world’s first commercial quantum computers” but spent years failing to convince experts that the machines actually were doing what a quantum computer should.

Then NASA summoned journalists to building N-258 at its Ames Research Center in Mountain View, California, which since 2013 has hosted a D-Wave computer bought by Google.

There Hartmut Neven, who leads the Quantum Artificial Intelligence lab Google established to experiment with the D-Wave machine, unveiled the first real evidence that it can offer the power proponents of quantum computing have promised.

In a carefully designed test, the superconducting chip inside D-Wave’s computer—known as a quantum annealer—had performed 100 million times faster than a conventional processor.

However, this kind of advantage needs to be available in practical computing tasks, not just contrived tests. “We need to make it easier to take a problem that comes up at an engineer’s desk and put it into the computer,” said Neven.

That’s where Martinis comes in. Neven doesn’t think D-Wave can get a version of its quantum annealer ready to serve Google’s engineers quickly enough, so he hired Martinis to do it.

“It became clear that we can’t just wait,” Neven says. “There’s a list of shortcomings that need to be overcome in order to arrive at a real technology.”

He says the qubits on D-Wave’s chip are too unreliable and aren’t wired together thickly enough. (D-Wave’s CEO, Vern Brownell, responds that he’s not worried about competition from Google.)

Google will be competing not only with whatever improvements D-Wave can make, but also with Microsoft and IBM, which have substantial quantum computing projects of their own.

But those companies are focused on designs much further from becoming practically useful. Indeed, a rough internal time line for Google’s project estimates that Martinis’s group can make a quantum annealer with 100 qubits as soon as 2017.

The difficulty of creating qubits that are stable enough is the reason we don’t have quantum computers yet. But Martinis has been working on that for more than 11 years and thinks he’s nearly there.

The coherence time of his qubits, or the length of time they can maintain a superposition, is tens of microseconds—about 10,000 times the figure for those on D-Wave’s chip.

Martinis aims to show off a complete universal quantum computer with about 100 qubits around the same time he delivers Google’s new quantum annealer, in about two years.

He thinks that once he can get his qubits reliable enough to put 100 of them on a universal quantum chip, the path to combining many more will open up. “This is something we understand pretty well,” he says. “It’s hard to get coherence but easy to scale up.”

Figuring out how Martinis’s chips can make Google’s software less stupid falls to Neven.

He thinks that the prodigious power of qubits will narrow the gap between machine learning and biological learning—and remake the field of artificial intelligence. “Machine learning will be transformed into quantum learning,” he says. That could mean software that can learn from messier data, or from less data, or even without explicit instruction.

Neven muses that this kind of computational muscle could be the key to giving computers capabilities today limited to humans. “People talk about whether we can make creative machines–the most creative systems we can build will be quantum AI systems,” he says.

Neven pictures rows of superconducting chips lined up in data centers for Google engineers to access over the Internet relatively soon.

“I would predict that in 10 years there’s nothing but quantum machine learning–you don’t do the conventional way anymore,” he says.

A smiling Martinis warily accepts that vision. “I like that, but it’s hard,” he says. “He can say that, but I have to build it.” – Tom Simonite

 

Wednesday 20th January 2016

Babylon Health Lets you Talk to a Doctor Through your Smartphone

AI-powered version of the app to be released within two months

DeepMind cofounders Demis Hassabis and Mustafa Suleyman are among a group of investors that are due to back Babylon Health with $25 million (£17 million) — an app that allows people to consult a doctor through their mobile phone.

The funding round, to be led by Swedish investment group AB Kinnevik, will reportedly allow Babylon to hire engineers and scientists that can build a version of its platform that is powered by artificial intelligence.

Hassabis and Suleyman are widely regarded as some of the most prominent minds in the field of artificial intelligence so they may be able to help develop the new AI-powered version of Babylon.

An AI-powered version of Babylon is expected to be released within the next two months. The AI version of the app will ask the patient a series of questions about their symptoms before giving them the advice they require.
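The question-and-answer flow described here is, at its simplest, a decision tree over symptoms. A toy sketch of the interaction (illustrative only, not Babylon’s actual triage logic):

```python
# Toy symptom-triage decision tree: ask questions, branch on answers,
# end at a recommendation.

TRIAGE_TREE = {
    "question": "Do you have a fever?",
    "yes": {"question": "Have you had it for more than 3 days?",
            "yes": "Please book a video consultation with a doctor.",
            "no": "Rest, fluids, and check again tomorrow."},
    "no": "No urgent action suggested; monitor your symptoms.",
}

def triage(node, answers):
    """Walk the tree using a list of 'yes'/'no' answers."""
    while isinstance(node, dict):
        print(node["question"])
        node = node[answers.pop(0)]
    return node

print(triage(TRIAGE_TREE, ["yes", "no"]))
```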

Over 150,000 registered users have signed up to Babylon’s subscription health service, which allows people to have a video conference with one of the 100 doctors that are employed full time by Babylon. People can also use Babylon to book appointments and order medical tests.

Founder and CEO Ali Parsa told the FT that his company, which charges users £4.99 a month, has no intention of completely replacing doctors with machines. However, he believes there’s a case for using technology to facilitate the screening process and referring patients to trained medical staff when necessary.

“The challenge is how do you deal with the bottleneck that answers people’s questions, checks their symptoms so that they don’t go to the doctor, and if they do, they go appropriately?” – Sam Shead

 

Wednesday 20th January 2016

LEARNING / EDUCATION

Code School Udacity Promises Refunds If You Don’t Get a Job

Udacity, the online educational service founded by artificial intelligence guru and ex-Googler Sebastian Thrun, is offering a new set of tech degrees that guarantee a job in six months or your money back.

The Silicon Valley-based startup is attaching this money-back guarantee to four of its online courses, courses designed to train machine learning engineers and software developers that build apps for Google Android devices, Apple iOS devices, and the web.

These online courses typically span about 9 months and require about 10 hours of study per week, and they’re priced at $299 a pop.

That’s about $100 above the company’s usual fee, but the idea is that students will also work closely with specialists that can help them prepare for interviews and find a job after their degree is complete. – Cade Metz

 

Wednesday 17th February 2016

Viv: Artificial Intelligence Virtual Personal Assistant

The company is working on what co-founder and CEO Dag Kittlaus describes as a “global brain” – a new form of voice-controlled virtual personal assistant.

With the odd flash of personality, Viv will be able to perform thousands of tasks, and it won’t just be stuck in a phone but integrated into everything from fridges to cars. “Tell Viv what you want and it will orchestrate this massive network of services that will take care of it,” he says.

It is an ambitious project but Kittlaus isn’t without a track record. The last company he co-founded invented Siri, the original virtual assistant now standard in Apple products. Siri Inc was acquired by the tech giant for a reported $200m in 2010.

But, Kittlaus says, all these virtual assistants he helped birth are limited in their capabilities. Enter Viv. “What happens when you have a system that is 10,000 times more capable?” he asks. “It will shift the economics of the internet.”

Kittlaus pulls out his phone to demonstrate a prototype (he won’t say when Viv will launch but intimates that 2016 will be a big year). “I need a ride to the nearest pediatrician in San Jose,” he says to the phone. It produces a list of pediatricians sorted by distance and with their ratings courtesy of online doctor-booking service ZocDoc. Kittlaus taps one and the phone shows how far away the Uber is that could come and collect him. “If I click, there is going to be a car on the way,” he says. “See how those services just work together.”

He moves on to another example. “Send my mom a dozen yellow roses.” Viv can combine information in his contact list – where he has tagged his mother – with the services of an online florist that delivers across the US. Other requests Kittlaus says Viv will be able to accomplish include “On the way to my brother’s house, I need to pick up some good wine that goes well with lasagne” and “Find me a place to take my kids in the last week of March in the Caribbean”. Later I test out how well both Siri and Google’s virtual assistant perform on these examples. Neither gets far.

Viv can be different because it is being designed to be totally open, says Kittlaus. Any service, product or knowledge that any company or individual wants to imbue with a speaking component can be plugged into the network to work together with the others already in there. (Dozens of companies, from Uber to Florist One, are in the prototype.) Other virtual assistants are essentially closed. Apple and only Apple, for example, decides what capabilities get integrated into Siri.

Viv’s biggest secret is the technology to bring the different services together on the fly to respond to requests for which it hasn’t been specifically programmed. “It is a program that writes its own program, which is the only way you can scale thousands of services working together that know nothing about one another,” says Kittlaus. Other personal assistants generally have their responses programmed by a developer. They are, essentially, scripted. There was no choice but to do things differently, says Kittlaus. To think of every combination of things that could be asked would be impossible. Viv will also include elements of learning; it will adapt as it comes to know your preferences.
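Kittlaus’s “program that writes its own program” is, at heart, a planner that chains services whose inputs and outputs fit together. Below is a minimal sketch of that idea in Python; all service names and signatures here are invented for illustration, not Viv’s actual API.

```python
# Hypothetical sketch of on-the-fly service composition, in the spirit of
# Kittlaus's "a program that writes its own program". Services declare what
# they need and what they produce; a tiny planner chains them toward a goal.

SERVICES = {
    # name: (required inputs, produced fact, implementation)
    "find_pediatricians": ({"city"}, "doctor_list",
                           lambda a: ["Dr. Lee", "Dr. Cho"]),
    "book_ride":          ({"destination"}, "ride",
                           lambda a: f"car en route to {a['destination']}"),
}

def plan(goal, available):
    """Return a sequence of services that yields `goal` from the known facts."""
    steps, facts = [], set(available)
    changed = True
    while changed and goal not in facts:
        changed = False
        for name, (needs, produces, _) in SERVICES.items():
            if produces not in facts and needs <= facts:
                steps.append(name)
                facts.add(produces)
                changed = True
    return steps if goal in facts else None

def run(goal, args):
    """Execute the planned chain, threading each service's output forward."""
    for name in plan(goal, set(args)) or []:
        needs, produces, fn = SERVICES[name]
        args[produces] = fn(args)
    return args.get(goal)

print(run("doctor_list", {"city": "San Jose"}))  # chains find_pediatricians
```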

Expect a phone app initially, says Kittlaus, but the loftier ambition is to incorporate Viv into all manner of devices, including cars. He imagines Viv’s icon becoming ubiquitous. “Anywhere you see it will mean you can talk to that thing,” he says. Of course this will require time: for companies to volunteer their services, and for users to come on board. But Kittlaus says some of the world’s largest consumer electronics companies are “very interested in plugging in”.

Viv has the potential to upend internet economics, says Kittlaus. Companies currently spend billions to advertise online with Google, and much traffic arrives based on web users’ keyword searches. But if instead requests are directed at Viv, it would cut out the middleman. The team are still exploring different business models, but one involves charging a processing fee on top of every transaction. – Zoe Corbyn

 

Wednesday 17th February 2016

Viv’s Competition: Today’s Virtual Personal Assistants

Name: Siri
Company: Apple
Communication: Voice
The original personal assistant, launched on the iPhone in 2011 and incorporated into many Apple products. Siri can answer questions, send messages, place calls, make dinner reservations through OpenTable and more.

Name: Google Now
Company: Google
Communication: Voice and typing
Available through the Google app or Chrome browser, capabilities include answering questions, getting directions and creating reminders. It also proactively delivers information to users that it predicts they might want, such as traffic conditions during commutes.

Name: Cortana
Company: Microsoft
Communication: Voice
Built into Microsoft phones and Windows 10, Cortana will help you find things on your PC, manage your calendar and track packages. It also tells jokes.

Name: Alexa
Company: Amazon
Communication: Voice
Embedded inside Amazon’s Echo, the cylindrical speaker device that went on general sale in June 2015 in the US. Call on Alexa to stream music, give cooking assistance and reorder Amazon items.

Name: M
Company: Facebook
Communication: Typing
Released in August 2015 as a pilot and integrated into Facebook Messenger, M supports sophisticated interactions but behind the scenes relies on both artificial intelligence and humans to fulfil requests, though the idea is that eventually it will know enough to operate on its own.

Zoe Corbyn

 

Wednesday 17th February 2016

Google Achieves AI ‘Breakthrough’ by Beating Go Champion

The Chinese game is viewed as a much tougher challenge than chess for computers because there are many more ways a Go match can play out.

Earlier on Wednesday, Facebook’s chief executive had said its own AI project had been “getting close” to beating humans at Go.

DeepMind’s chief executive, Demis Hassabis, said its AlphaGo software followed a three-stage process, which began with making it analyse 30 million moves from games played by humans.

“It learns what patterns generally occur – what sort are good and what sort are bad. If you like, that’s the part of the program that learns the intuitive part of Go.

“It now plays different versions of itself millions and millions of times, and each time it gets incrementally better. It learns from its mistakes.

“The final step is known as the Monte Carlo Tree Search, which is really the planning stage.

“Now it has all the intuitive knowledge about which positions are good in Go, it can make long-range plans.”
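To make the planning stage Hassabis describes concrete, here is a minimal, self-contained Monte Carlo Tree Search on the toy game Nim (players alternately take 1 to 3 stones; taking the last stone wins). This is only a sketch of the technique: AlphaGo guides the same kind of search with learned policy and value networks rather than the random rollouts used here.

```python
import math, random

# Minimal Monte Carlo Tree Search on Nim, as a sketch of the "planning
# stage". Not AlphaGo's implementation: rollouts here are purely random.

class Node:
    def __init__(self, stones, parent=None):
        self.stones = stones        # stones left after the move into this node
        self.parent = parent
        self.children = {}          # move (1-3) -> child Node
        self.visits = 0
        self.wins = 0               # wins for the player who moved INTO this node

def ucb(child, parent, c=1.4):
    """Upper confidence bound: balances exploitation and exploration."""
    if child.visits == 0:
        return float("inf")
    return child.wins / child.visits + c * math.sqrt(
        math.log(parent.visits) / child.visits)

def rollout(stones):
    """Random playout; returns 0 if the player to move now takes the last stone."""
    to_move = 0
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return to_move
        to_move = 1 - to_move

def mcts_move(stones, n_sim=2000):
    root = Node(stones)
    for _ in range(n_sim):
        node = root
        # 1. Selection: descend by UCB until reaching a leaf or terminal node.
        while node.children and node.stones:
            node = max(node.children.values(), key=lambda ch: ucb(ch, node))
        # 2. Expansion: grow the tree at a previously visited, non-terminal leaf.
        if node.stones and node.visits:
            for take in range(1, min(3, node.stones) + 1):
                node.children[take] = Node(node.stones - take, node)
            node = random.choice(list(node.children.values()))
        # 3. Simulation: estimate the position by a random playout.
        mover_into_node_wins = (rollout(node.stones) != 0) if node.stones else True
        # 4. Backpropagation: credit alternates between the two players.
        win = mover_into_node_wins
        while node:
            node.visits += 1
            node.wins += win
            win = not win
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)

print(mcts_move(10))  # should settle on taking 2, leaving a multiple of 4
```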

“Many of the best programmers in the world were asked last year how long it would take for a program to beat a top professional, and most of them were predicting 10-plus years,” Mr Hassabis said.

“The reason it was quicker than people expected was the pace of innovation in the underlying algorithms and also how much more potential you can get by combining different algorithms together.”

Prof Zoubin Ghahramani, of the University of Cambridge, said: “This is certainly a major breakthrough for AI, with wider implications.

“The technical idea that underlies it is the idea of reinforcement learning – getting computers to learn to improve their behaviour to achieve goals. That could be used for decision-making problems – to help doctors make treatment plans, for example, in businesses or anywhere where you’d like to have computers assist humans in decision making.”

DeepMind now intends to pit AlphaGo against Lee Sedol – the world’s top Go player – in Seoul in March.

“For us, Go is the pinnacle of board game challenges,” said Mr Hassabis. “Now, we are moving towards 3D games or simulations that are much more like the real world rather than the Atari games we tackled last year.” – BBC News

 

Sunday 13th March 2016

Investing in Robotics and AI Companies

Here are some AI (and robotics) related companies to think about.

I’m not saying you should buy them (now) or sell for that matter, but they are definitely worth considering at the right valuations.

Think about becoming an owner of AI and robotics companies while there is still time. I plan to buy some of the most obvious ones (including Google) in the ongoing market downturn (2016-2017).

Top 6 most obvious AI companies

  • Alphabet (Google)
  • Facebook (M, Deep Learning)
  • IBM (Watson, neuromorphic chips)
  • Apple (Siri)
  • MSFT (Skype real-time translation, emotion recognition)
  • Amazon (customer prediction)

Yes, I’m US centric. So sue me 🙂

Other

  • SAP (BI)
  • Oracle (BI)
  • Sony
  • Samsung
  • Twitter
  • Baidu
  • Alibaba
  • NEC
  • Nidec
  • Nuance (HHMM, speech)
  • Marketo
  • Opower
  • Nippon Ceramic
  • Pacific Industrial

Private companies (*I think):

  • *Mobvoi
  • *Scaled Inference
  • *Kensho
  • *Expect Labs
  • *Vicarious
  • *Nara Logics
  • *Context Relevant
  • *MetaMind
  • *Rethink Robotics
  • *Sentient Technologies
  • *Mobileye

General AI areas to consider when searching for AI companies

  • Self-driving cars
  • Language processing
  • Search agents
  • Image processing
  • Robotics
  • Machine learning
  • Expert systems
  • Oil and mineral exploration
  • Pharmaceutical research
  • Materials research
  • Computer chips (neuromorphic, memristors)
  • Energy, power utilities

Mikael Syding

 

Sunday 13th March 2016

DeepMind Smartphone Assistant

The movie Her is just an easy popular mainstream view of what that sort of thing is. We would like these smartphone assistant things to actually be smart and contextual and have a deeper understanding of what you’re trying to do.

At the moment most of these systems are extremely brittle — once you go off the templates that have been pre-programmed then they’re pretty useless. So it’s about making that actually adaptable and flexible and more robust.

It’s this dichotomy between pre-programmed and learnt. At the moment pretty much all smartphone assistants are special-cased and pre-programmed and that means they’re brittle because they can only do the things they were pre-programmed for. And the real world’s very messy and complicated and users do all sorts of unpredictable things that you can’t know ahead of time.
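A toy illustration of that brittleness: the hand-written template below handles exactly one phrasing and fails on any other way of expressing the same intent. The utterances and the template are invented for illustration.

```python
import re

# A toy illustration of pre-programmed brittleness: one template, one phrasing.

TEMPLATE = re.compile(r"set an alarm for (\d+)(am|pm)")

def assistant(utterance):
    m = TEMPLATE.match(utterance.lower())
    if m:
        return f"Alarm set for {m.group(1)}{m.group(2)}."
    return "Sorry, I don't understand."  # anything off-template fails

print(assistant("Set an alarm for 7am"))   # works
print(assistant("Wake me up at seven"))    # same intent, useless reply
```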

Our belief at DeepMind, certainly this was the founding principle, is that the only way to do intelligence is to do learning from the ground up and be general.

I think in the next two to three years you’ll start seeing it. I mean, it’ll be quite subtle to begin with, certain aspects will just work better. Maybe looking four to five, five-plus years away you’ll start seeing a big step change in capabilities. – Demis Hassabis

 

Sunday 13th March 2016

Google’s Boston Dynamics Has Created the Most Human Robot Yet

Boston Dynamics just released another incredible video featuring the latest version of its humanoid robot, ATLAS, which was initially developed for the DARPA Robotics Challenge.

The company says this version is the “next generation” of their humanoid, but the technological leap they have made is far from incremental.

This substantially upgraded model of ATLAS is electrically powered and hydraulically actuated. It uses sensors in its body and legs to balance, and LIDAR and stereo sensors in its head to avoid obstacles, assess the terrain, help with navigation and manipulate objects.

Seriously, the gulf between the robots featured in contemporary science fiction like Chappie and the stuff coming out of Boston Dynamics does not seem that wide anymore. It is amazing how far the robot has come in the past three years! – 33rd Square

 

Sunday 13th March 2016

Google’s AI Takes Historic Match Against Go Champ

*  Machines have conquered the last games. Now comes the real world

Google’s artificially intelligent Go-playing computer system has claimed victory in its historic match with Korean grandmaster Lee Sedol after winning a third straight game in this best-of-five series.

Go is exponentially more complex than chess and requires an added level of intuition—at least among humans. This makes the win a major milestone for AI—a moment whose meaning extends well beyond a single game.

Just two years ago, most experts believed that another decade would pass before a machine could claim this prize. But then researchers at DeepMind—a London AI lab acquired by Google—changed the equation using two increasingly powerful forms of machine learning, technologies that allow machines to learn largely on their own. Lee Sedol is widely regarded as the best Go player of the past decade. But he was beaten by a machine that taught itself to play the ancient game.

The machine learning techniques at the heart of AlphaGo already drive so many services inside the Internet giant—helping to identify faces in photos, recognize commands spoken into smartphones, choose Internet search results, and much more. They could also potentially reinvent everything from scientific research to robotics.

The machine plays like no human ever would—quite literally.

Using what are called deep neural networks—vast networks of hardware and software that mimic the web of neurons in the human brain—AlphaGo initially learned the game by analyzing thousands of moves from real live Go grandmasters. But then, using a sister technology called reinforcement learning, it reached a new level by playing game after game against itself, coming to recognize moves that give it the highest probability of winning.
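The self-play loop described here can be shown in miniature with a lookup table in place of a deep network: a tabular learner below plays Nim against itself and credits the moves of whichever side won. This is a sketch of the principle only, not AlphaGo's training procedure.

```python
import random
from collections import defaultdict

# Toy self-play reinforcement learning: tabular value estimates on Nim.
# AlphaGo applies the same idea with deep networks instead of this table.

Q = defaultdict(float)   # (stones, move) -> estimated value for the mover
ALPHA, EPS = 0.1, 0.2    # learning rate and exploration rate

def choose(stones):
    moves = list(range(1, min(3, stones) + 1))
    if random.random() < EPS:                          # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])    # exploit

for _ in range(20000):                                 # self-play games
    stones, history = 10, []
    while stones:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who took the last stone won; credit alternates backwards.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

# Should learn to take 2 from 10 stones, leaving a multiple of 4.
print(max(range(1, 4), key=lambda m: Q[(10, m)]))
```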

The result is a machine that often makes the most inhuman of moves.

This happened in Game Two—in a very big way. With its 19th move, AlphaGo made a play that shocked just about everyone, including both the commentators and Lee Sedol, who needed nearly fifteen minutes to choose a response. The commentators couldn’t even begin to evaluate AlphaGo’s move, but it proved effective. Three hours later, AlphaGo had won the game.

This week’s match is so meaningful because this ancient pastime is so complex. As Google likes to say of Go: there are more possible positions on the board than atoms in the universe.

Just a few days earlier, most in the Go community were sure this wasn’t possible. But these wins were decisive. Machines have conquered the last games. Now comes the real world. – Cade Metz

 

Sunday 13th March 2016

DeepMind Founder Demis Hassabis Wants to Solve Intelligence

The aim of DeepMind is not just to beat games, fun and exciting though that is. Games are useful as a testbed, a platform for trying out our algorithmic ideas and testing how far they scale and how well they do, and it’s just a very efficient way of doing that. Ultimately we want to apply this to big real-world problems.

We’re concentrating at the moment on things like healthcare and recommendation systems, these kinds of things.

What I’m really excited to use this kind of AI for is science, and advancing that faster. I’d like to see AI-assisted science where you have effectively AI research assistants that do a lot of the drudgery work and surface interesting articles, find structure in vast amounts of data, and then surface that to the human experts and scientists who can make quicker breakthroughs.

I was giving a talk at CERN a few months ago; obviously they create more data than pretty much anyone on the planet, and for all we know there could be new particles sitting on their massive hard drives somewhere and no-one’s got around to analyzing that because there’s just so much data. So I think it’d be cool if one day an AI was involved in finding a new particle. – Demis Hassabis

 

Sunday 13th March 2016

Does Google Deepmind’s A.I. Exhibit Super-Human Abilities? Some Japanese Pros Think So

Tuur Demeester

 

Sunday 13th March 2016

The Power and the Mystery

At first, Fan Hui thought the move was rather odd. But then he saw its beauty.

“It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.” It’s a word he keeps repeating. Beautiful. Beautiful. Beautiful.

The move in question was the 37th in the second game of the historic Go match between Lee Sedol, one of the world’s top players, and AlphaGo, an artificially intelligent computing system built by researchers at Google. Inside the towering Four Seasons hotel in downtown Seoul, the game was approaching the end of its first hour when AlphaGo instructed its human assistant to place a black stone in a largely open area on the right-hand side of the 19-by-19 grid that defines this ancient game. And just about everyone was shocked.

“That’s a very strange move,” said one of the match’s English language commentators, who is himself a very talented Go player. Then the other chuckled and said: “I thought it was a mistake.” But perhaps no one was more surprised than Lee Sedol, who stood up and left the match room. “He had to go wash his face or something—just to recover,” said the first commentator.

Even after Lee Sedol returned to the table, he didn’t quite know what to do, spending nearly 15 minutes considering his next play. AlphaGo’s move didn’t seem to connect with what had come before. In essence, the machine was abandoning a group of stones on the lower half of the board to make a play in a different area.

AlphaGo placed its black stone just beneath a single white stone played earlier by Lee Sedol, and though the move may have made sense in another situation, it was completely unexpected in that particular place at that particular time—a surprise all the more remarkable when you consider that people have been playing Go for more than 2,500 years.

The commentators couldn’t even begin to evaluate the merits of the move. – Cade Metz

* Now imagine the same ultra-competency being developed in other endeavors like medicine, law, scientific research, and war. – Veteran4Peace

 

Sunday 13th March 2016

AI Can Put Us on the Most Optimal Paths to Solving Problems

Just like this game of Go, AI will eventually start ‘thinking in ways we never conceived’ about many things like curing diseases.

There could be 50 different promising paths to curing cancer for instance but only enough funding and scientists to tackle the first 5 we think of, even if the ultimate cure is on one of the other 45.

What a fascinating idea it is that perhaps AI will think of every path and always put us on the most optimal ones. – iushciuweiush

 

Sunday 13th March 2016

What Problems Can Humans and Machines Overcome Together?

The AI research community has made incredible progress in five years.

A key insight has been that it’s much better to let computers figure out how to accomplish goals and improve through experience, rather than handcrafting instructions for every individual task. That’s also the secret to AlphaGo’s success.

The real challenges in the world are not “human versus machine,” but humans and whatever tools we can muster versus the intractable and complex problems that surround us. The most important struggles already have thousands of brilliant and dedicated people making progress on issues that affect every one of us.

Technologies such as AI will enhance our ability to respond to these pressing global challenges by providing powerful tools that help experts make faster breakthroughs. We need machine learning to help us tame complexity, predict the unpredictable and support us as we achieve the previously impossible.

As our tools get smarter and more versatile, it’s incumbent upon us to start thinking much more ambitiously and creatively about solutions to society’s toughest global challenges. We need to reject the notion that some problems are just intractable. We can aim higher.

Consider what the world’s best clinicians or educators could achieve with machine learning tools assisting them. The real test isn’t whether a machine can defeat a human, but what problems humans and machines can overcome together. – Sundar Pichai and Demis Hassabis

 

Tuesday 29th March 2016

Self-Driving Car Startup Fights to Beat Tesla and Google

We want to ship a product by the end of the year that people will be able to install in their own cars and it will give them more self-driving capability than the Tesla today. – George Hotz

George Hotz’s pitch is that he can build self-driving car algorithms faster and better than any carmaker or even Google.

“Google is going to ship by the end of 2020? We’re actually making this stuff work,” said Hotz, who’s wearing jeans and a black hoodie with a large white comma on the front for his new company, Comma.ai.

Since he revealed his ambitions in a Bloomberg Businessweek article published last December, Hotz has attracted plenty of attention. The CEOs of Delphi, a major auto parts supplier, and Nvidia, maker of graphics processing units, have paid visits to his basement office at the “Crypto Castle,” a three-story house located in San Francisco’s Potrero Hill neighborhood and occupied by some of the city’s Bitcoin entrepreneurs.

He’s generated enough excitement to score an unannounced seed investment from venture capital firm Andreessen Horowitz that values Hotz’s tiny, fledgling company at $20 million, according to sources.


Hotz began Comma last October and he’s well past the lone-hacker-in-the-basement stage. Yunus Saatchi, who has a PhD from the University of Cambridge in artificial intelligence, has joined as chief machine learning officer. Saatchi was a colleague of Hotz’s at Vicarious, a San Francisco-based AI startup with $72 million in financing from investors like Musk and Amazon’s Jeff Bezos.

Jake Smith, a roommate of Hotz’s in the Crypto Castle who is involved in the Bitcoin community, is head of operations. And Elizabeth Stark, another prominent fixture in the Bitcoin startup world, is Comma’s legal advisor. (They’re all wearing Comma.ai shirts when I meet them.) Hotz plans to hire around eight people total in the coming three months. He’s looking for people in machine learning and consumer hardware.

Hotz is also starting work on what will become the company’s first product — a self-driving kit that car owners will be able to purchase directly from Comma to equip their vehicles with autonomous driving capabilities. He hasn’t come close to working out the details of what this product will ultimately look like, but he said it might be a dash cam that plugs into the on-board diagnostics (OBD-II) port, which gives access to the car’s internal systems and is found in most cars made after 1996. It will provide cars with ADAS features, like lane-keeping assistance and emergency braking.
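For a taste of what that port exposes, the open-source python-obd package can issue standard queries against a car's internal systems, assuming a USB or Bluetooth OBD-II adapter is plugged in. This is illustrative only, not Comma.ai's software.

```python
import obd  # python-obd, a community library for the OBD-II port Hotz mentions

# Illustrative only: standard OBD-II queries, assuming an adapter is attached.
connection = obd.OBD()                        # auto-detects the adapter

speed = connection.query(obd.commands.SPEED)  # vehicle speed
rpm = connection.query(obd.commands.RPM)      # engine RPM

print(speed.value, rpm.value)
```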

“We believe our killer app is traffic,” Hotz said. “Humans are bad at traffic. We can make something that drives super-humanly smooth through traffic.”

Hotz said he won’t be able to turn every car into a semi-autonomous vehicle. At a minimum, the car will have to have anti-lock brakes and power steering. He’s hoping Comma’s product will work first with the five top-selling cars in the United States. – Aaron Tilley

 

Tuesday 29th March 2016

$3 Billion of Artificial Intelligence R&D Planned in South Korea

After what has been dubbed the ‘AlphaGo shock’, South Korea is getting serious about artificial intelligence.

South Korea, well known for its IT infrastructure, is promising 3.5 trillion won ($3 billion) in funding from the public and private sectors to develop artificial intelligence for corporate and university AI projects.

South Korea’s President Park Geun-hye assembled leaders across the country’s tech industry and senior government officials in Seoul last week to announce plans to invest the amount over the next five years.

It appears to be largely a reaction to the phenomenal performance of Google’s algorithm AlphaGo in an historic AI-versus-human game in Seoul earlier this month, which captured the South Korean media’s imagination.

“Above all, Korean society is ironically lucky, that thanks to the ‘AlphaGo shock’ we have learned the importance of AI before it is too late,” the president told local reporters assembled for the meeting, describing the game as a watershed moment of an imminent “fourth industrial revolution”.

South Korea will establish a new high-profile, public/private research centre with participation from several Korean conglomerates, including Samsung, LG, telecom giant KT, SK Telecom, Hyundai Motor, and internet portal Naver.

The institute was reportedly already in the works, but AlphaGo’s domination quickened the process of setting up the grouping. Some Korean media reports indicate that the institute could open its doors as early as 2017.

South Korea already funds two high-profile AI projects — Exobrain, which is intended to compete with IBM’s Watson computer, and Deep View, a computer vision project. – Philip Iglauer

 

Tuesday 29th March 2016

“A Hundred Years Before A Computer Beats Humans at Go”

As Go fans proudly note, a computer has not come close to mastering what remains a uniquely human game.

To play a decent game of Go, a computer must be endowed with the ability to recognize subtle, complex patterns and to draw on the kind of intuitive knowledge that is the hallmark of human intelligence.

“It may be a hundred years before a computer beats humans at Go — maybe even longer,” said Dr. Piet Hut, an astrophysicist at the Institute for Advanced Study in Princeton, N.J., and a fan of the game. “If a reasonably intelligent person learned to play Go, in a few months he could beat all existing computer programs. You don’t have to be a Kasparov.”

When or if a computer defeats a human Go champion, it will be a sign that artificial intelligence is truly beginning to become as good as the real thing. – The New York Times, July 1997

 

Tuesday 29th March 2016

In Two Moves, AlphaGo and Lee Sedol Redefined the Future

In Game Two, the Google machine made a move that no human ever would. And it was beautiful. As the world looked on, the move so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence.

But in Game Four, the human made a move that no machine would ever expect. And it was beautiful too. Indeed, it was just as beautiful as the move from the Google machine—no less and no more. It showed that although machines are now capable of moments of genius, humans have hardly lost the ability to generate their own transcendent moments. And it seems that in the years to come, as we humans work with these machines, our genius will only grow in tandem with our creations.

Move 37


With the 37th move in the match’s second game, AlphaGo landed a surprise on the right-hand side of the 19-by-19 board that flummoxed even the world’s best Go players, including Lee Sedol. “That’s a very strange move,” said one commentator, himself a nine dan Go player, the highest rank there is. “I thought it was a mistake,” said the other.

Lee Sedol, after leaving the match room, took nearly fifteen minutes to formulate a response. Fan Hui—the three-time European Go champion who played AlphaGo during a closed-door match in October, losing five games to none—reacted with incredulity. But then, drawing on his experience with AlphaGo—he has played the machine time and again in the five months since October—Fan Hui saw the beauty in this rather unusual move.

AlphaGo had calculated that there was a one-in-ten-thousand chance that a human would make that move. But when it drew on all the knowledge it had accumulated by playing itself so many times—and looked ahead into the future of the game—it decided to make the move anyway. And the move was genius.

Move 78


In Game Four, Lee Sedol was intent on regaining some pride for himself and the tens of millions who watched the match across the globe. But midway through the game, the Korean’s prospects didn’t look good. “Lee Sedol needs to do something special,” said one commentator. “Otherwise, it’s just not going to be enough.” But after considering his next move for a good 30 minutes, he delivered something special. It was Move 78, a “wedge” play in the middle of the board, and it immediately turned the game around.

As we found out after the game, AlphaGo made a disastrous play with its very next move, and just minutes later, after analyzing the board position, the machine determined that its chances of winning had suddenly fallen off a cliff.

Commentator and nine dan Go player Michael Redmond called Lee Sedol’s move brilliant: “It took me by surprise. I’m sure that it would take most opponents by surprise. I think it took AlphaGo by surprise.”

Among Go players, the move was dubbed “God’s Touch.” It was high praise indeed. But then the higher praise came from AlphaGo.

One in Ten Thousand – Again

The next morning, I discussed the move with Demis Hassabis as we walked down Sejong Daero, the main boulevard just down the street from the Four Seasons. Hassabis oversees the DeepMind lab and was very much the face of AlphaGo during the seven-day match. As we walked, the passers-by treated him like a celebrity—and indeed he was, after appearing in countless newspapers and on so many TV news shows. Here in Korea, where more than 8 million people play the game of Go, Lee Sedol is a national figure.

Hassabis told me that AlphaGo was unprepared for Lee Sedol’s Move 78 because it didn’t think that a human would ever play it. Drawing on its months and months of training, it decided there was a one-in-ten-thousand chance of that happening. In other words: exactly the same tiny chance that a human would have played AlphaGo’s Move 37 in Game Two.

The symmetry of these two moves is more beautiful than anything else. One-in-ten-thousand and one-in-ten-thousand. This is what we should all take away from these astounding seven days. Hassabis and Silver and their fellow researchers have built a machine capable of something super-human. But think about what happens when you put these two things together. Human and machine. Fan Hui will tell you that after five months of playing match after match with AlphaGo, he sees the game completely differently. His world ranking has skyrocketed. And apparently, Lee Sedol feels the same way. Hassabis says that he and the Korean met after Game Four, and that Lee Sedol echoed the words of Fan Hui. Just these few matches with AlphaGo, the Korean told Hassabis, have opened his eyes.

This isn’t human versus machine. It’s human and machine. Move 37 was beyond what any of us could fathom. But then came Move 78. And we have to ask: If Lee Sedol hadn’t played those first three games against AlphaGo, would he have found God’s Touch? The machine that defeated him had also helped him find the way. – Cade Metz

 

Sunday 24th April 2016

George Hotz Scores $3.1m Investment for Self-Driving Car Startup Comma.ai

Comma hopes to sell consumers road-worthy car-automation ‘conversion kits’ for less than $1,000

Comma has received $3.1m from well-known investment firm Andreessen Horowitz to make conversion kits that turn normal cars into semi-self-driving cars. Hotz plans to start selling these by the end of the year for Honda, Acura and potentially other brands.

For many consumers, automated vehicles still feel like science fiction and the province of giant research labs at Google, Uber and General Motors (GM). But there’s increasing evidence that many drivers’ first interaction with a self-driving vehicle will be one engineered by a small startup. Some of these companies are making automated public shuttles, or exploring ways to make existing cars autonomous in certain circumstances.

“We are going to win self-driving cars,” Hotz said in a recent interview. “The bar is low.”

That might seem like bold talk from a twentysomething who quit his day job at an artificial intelligence company last summer. But Hotz isn’t shy of attention. He recently challenged Tesla founder Elon Musk to a race to build the first vehicle that can navigate San Francisco’s tourist-packed Golden Gate Bridge on its own.

“I think we can maybe build better self-driving cars,” Hotz says. “He can build a better rocket.”

George Hotz’s Elon Musk dartboard (Photo credit: Chad McClymonds)

When asked what he would do with his new venture funds, Hotz said he would focus on hiring the best machine-learning programmers he could find. “Who I really want to hire is 20 more copies of me,” he says.

In December, Hotz made a name for himself when he showed Bloomberg Businessweek how he made an Acura drive itself down the highway. Hotz had hacked the car’s onboard computer. He then added a camera and a radar. Suddenly, the vehicle was cruising down Bay Area freeways as Hotz sat in the driver seat, his hands not on the steering wheel.

By the end of the year, Comma wants to sell consumers car-automation conversion kits for less than $1,000. Hotz is tight-lipped about what those will involve, but they will at least require some sort of alterations to a car’s onboard computer and hardware for the car to determine what’s going on around it.

Car automation has become increasingly democratized as much of the hardware behind the technology has fallen in price and the machine-learning techniques have been open-sourced. – Danny Yadron

 

Sunday 24th April 2016

AI Hits the Mainstream

Insurance, finance, manufacturing, oil and gas, auto manufacturing, health care: these may not be the industries that first spring to mind when you think of artificial intelligence. But as technology companies like Google and Baidu build labs and pioneer advances in the field, a broader group of industries are beginning to investigate how AI can work for them, too.

Today the industry selling AI software and services remains a small one. Dave Schubmehl, research director at IDC, calculates that sales for all companies selling cognitive software platforms—excluding companies like Google and Facebook, which do research for their own use—added up to $1 billion last year.

He predicts that by 2020 that number will exceed $10 billion. Other than a few large players like IBM and Palantir Technologies, AI remains a market of startups: 2,600 companies, by Bloomberg’s count.

General Electric is using AI to improve service on its highly engineered jet engines. By combining a form of AI called computer vision (originally developed to categorize movies and TV footage when GE owned NBC Universal) with CAD drawings and data from cameras and infrared detectors, GE has improved its detection of cracks and other problems in airplane engine blades.

The system eliminates errors common to traditional human reviews, such as a dip in detections on Fridays and Mondays, but also relies on human experts to confirm its alerts. The program then learns from that feedback, says Colin Parris, GE’s vice president of software research. – Nanette Byrnes

 

Sunday 24th April 2016

Neural Networks: Why AI Development is Going to Get Even Faster

The pace of development of artificial intelligence is going to get faster. And not for the typical reasons — more money, interest from megacompanies, faster computers, cheap & huge data, and so on. Now it’s about to accelerate because other fields are starting to mesh with it, letting insights from one feed into the other, and vice versa.

Neural networks are drawing sustained attention from researchers across the academic spectrum.  “Pretty much any researcher who has been to the NIPS Conference [a big AI conference] is beginning to evaluate neural networks for their application,” says Reza Zadeh, a consulting professor at Stanford. That’s going to have a number of weird effects.

People like neural networks because they basically let you chop out a bunch of hand-written code in favor of feeding inputs and outputs into neural nets and getting computers to come up with the stuff in-between. In technical terms, they infer functions.
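“They infer functions” can be demonstrated in a few lines: the two-layer network below learns XOR purely from input/output pairs, with no hand-written rule for the logic in between. A minimal sketch, not any particular lab's code.

```python
import numpy as np

# A minimal two-layer network that infers XOR from input/output pairs alone.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)             # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)           # forward pass: prediction
    # Backpropagate the squared error and take a gradient step.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0]
```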

Robotics has just started to get into neural networks. This has already sped up development. This year, Google demonstrated a system that teaches robotic arms to learn how to pick up objects of any size and shape. That work was driven by research conducted last year at Pieter Abbeel’s lab in Berkeley, which saw scientists combine two neural network-based techniques (reinforcement learning and deep learning) with robotics to create machines that could learn faster.

More distant communities have already adapted the technology to their own needs. Brendan Frey runs a company called Deep Genomics, which uses machine learning to analyze the genome. Part of the motivation for that is that humans are “very bad” at interpreting the genome, he says. Modern machine learning approaches give us a way to get computers to analyze this type of mind-bending data for us. “We must turn to truly superhuman artificial intelligence to overcome our limitations,” he says.

One of the reasons why so many academics from so many different disciplines are getting involved is that deep learning, though complex, is surprisingly adaptable. “Everybody who tries something seems to get things to work beyond what they expected,” says Pieter Abbeel. “Usually it’s the other way around.”

Oriol Vinyals, who came up with some of the technology that sits inside Google Inbox’s ‘Smart Reply‘ feature, developed a neural network-based algorithm to plot the shortest routes between various points on a map. “In a rather magical moment, we realized it worked,” he says. This generality not only encourages more experimentation but speeds up the development loop as well.

One challenge: though neural networks generalize very well, we still lack a decent theory to describe them, so much of the field proceeds by intuition. This is both cool and extremely bad. “It’s amazing to me that these very vague, intuitive arguments turned out to correspond to what is actually happening,” says Ilya Sutskever, research director at OpenAI, of the move to create ever-deeper neural network architectures. Work needs to be done here. “Theory often follows experiment in machine learning,” says Yoshua Bengio, one of the founders of the field.

My personal intuition is that deep learning is going to make its way into an ever-expanding number of domains. Given sufficiently large datasets, powerful computers, and the interest of subject-area experts, the deep learning tsunami looks set to wash over an ever-larger number of disciplines. – Jack Clark

 

Sunday 24th April 2016

A $2 Billion Chip to Accelerate Artificial Intelligence

Two years ago we were talking to 100 companies interested in using deep learning. This year we’re supporting 3,500. In two years there has been 35X growth. – Jen-Hsun Huang, CEO of Nvidia

The field of artificial intelligence has experienced a striking spurt of progress in recent years, with software becoming much better at understanding images, speech, and new tasks such as how to play games. Now the company whose hardware has underpinned much of that progress has created a chip to keep it going.

Nvidia announced a new chip called the Tesla P100 that’s designed to put more power behind a technique called deep learning. This technique has produced recent major advances such as the Google software AlphaGo that defeated the world’s top Go player last month.

Deep learning involves passing data through large collections of crudely simulated neurons. The P100 could help deliver more breakthroughs by making it possible for computer scientists to feed more data to their artificial neural networks or to create larger collections of virtual neurons.

Artificial neural networks have been around for decades, but deep learning only became relevant in the last five years, after researchers figured out that chips originally designed to handle video-game graphics made the technique much more powerful. Graphics processors remain crucial for deep learning, but Nvidia CEO Jen-Hsun Huang says that it is now time to make chips customized for this use case.

At a company event in San Jose, he said, “For the first time we designed a [graphics-processing] architecture dedicated to accelerating AI and to accelerating deep learning.” Nvidia spent more than $2 billion on R&D to produce the new chip, said Huang.

It has a total of 15 billion transistors, roughly three times as many as Nvidia’s previous chips. Huang said an artificial neural network powered by the new chip could learn from incoming data 12 times as fast as was possible using Nvidia’s previous best chip.

Deep-learning researchers from Facebook, Microsoft, and other companies that Nvidia granted early access to the new chip said they expect it to accelerate their progress by allowing them to work with larger collections of neurons.

“I think we’re going to be able to go quite a bit larger than we have been able to in the past, like 30 times bigger,” said Bryan Catanzaro, who works on deep learning at the Chinese search company Baidu. Increasing the size of neural networks has previously enabled major jumps in the smartness of software. For example, last year Microsoft managed to make software that beats humans at recognizing objects in photos by creating a much larger neural network.

Huang of Nvidia said that the new chip is already in production and that he expects cloud-computing companies to start using it this year. IBM, Dell, and HP are expected to sell it inside servers starting next year. – Tom Simonite

 

Saturday 18th June 2016

Elon Musk: We Are Less Than Two Years From Complete Car Autonomy

The Tesla CEO spoke at the Code Conference and predicted that we’re closer to self-driving cars than anybody thinks.

“I think we are less than two years away from complete autonomy, safer than humans, but regulations should take at least another year,” Musk said.

While many auto and tech companies–from Google to Uber and GM to Lyft and Apple to Ford–are researching and testing autonomous vehicles, Tesla seems on the verge of announcing that its Model 3 consumer sedan will have full self-driving capabilities.

Musk did not confirm that feature, but when asked multiple times on stage, he replied that there would be another Tesla event later in the year in which he would have more details.

The only thing he would say is that Tesla would do “the obvious thing”–seemingly a reference to a prior comment he made about autonomous driving being a must-have feature for future vehicles. – Brian Solomon

 

Saturday 18th June 2016

The Singularity is Near

When Ray Kurzweil published The Singularity Is Near in 2005, many scoffed at his outlandish predictions.

Two years before Apple launched its iPhone, Kurzweil imagined a world in which humans and computers essentially fuse, unlocking capabilities we normally see in science fiction movies.

He pointed out that as technology accelerates at an exponential rate, progress would eventually become virtually instantaneous—a singularity. Further, he predicted that as computers advanced, they would merge with other technologies, namely genomics, nanotechnology and robotics.

Today, Kurzweil’s ideas don’t seem quite so outlandish. Google’s DeepMind recently beat legendary Go world champion Lee Sedol. IBM’s Watson is expanding horizons in medicine, financial planning and even cooking. Self-driving cars are expected to be on the road by 2020.

Just as Kurzweil predicted, technology seems to be accelerating faster than ever before. – Greg Satell

 

Monday 8th August 2016

New Boston Dynamics Robot: Introducing SpotMini

 

 

Monday 8th August 2016

Google Turns on DeepMind AI, Cuts Cooling Energy Bill by 40%

  • This alone probably pays for the Deepmind acquisition. Shows how far below Pareto optimal limits even Google was. – Balaji S. Srinivasan
  • The electricity consumption of data centers is on pace to account for 12 percent of global electricity consumption by 2017. – Dan Heilman

Google Uses AI To Cool Data Centers and Slash Electricity Use

  • DeepMind controls about 120 variables in the data centers. The fans and the cooling systems and so on, and windows and other things. They were pretty astounded. – Demis Hassabis, DeepMind Co-Founder
  • I don’t think most grasp the significance of this. Oil companies have similar systems they pay billions trying to optimize. – iandanforth
  • It’s all about optimization. It can be used in supply logistics, shipping logistics and dynamic pricing in addition to keeping an industrial area at the right temperature. We’ll be seeing AI being applied to a lot more areas. – Dave Schubmehl
  • Honestly, I’m skeptical a generalized AI will go fully conscious in my lifetime. But these specialized AI? These things are going to start changing our lives over the next ten years in unimaginable ways. The energy savings alone is incredible. – tendimensions

The amount of energy consumed by big data centers has always been a headache for tech companies.

Keeping the servers cool as they crunch numbers is such a challenge that Facebook even built one of its facilities on the edge of the Arctic Circle.

Well, Google has a different solution to this problem: putting its DeepMind artificial intelligence unit in charge and using AI to manage power usage in parts of its data centers.

The results of this experiment? A 40 percent reduction in the amount of electricity needed for cooling, which Google describes as a “phenomenal step forward.”

A typical day of testing, including when machine learning recommendations were turned on, and when they were turned off.

The AI worked out the most efficient methods of cooling by analyzing data from sensors among the server racks, including information on things like temperatures and pump speeds.
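A drastically simplified sketch of the recipe DeepMind described: fit a model that predicts energy efficiency (PUE) from historical sensor readings, then search the controllable settings for the lowest predicted PUE. Synthetic data and a linear model stand in here for their ensembles of deep networks.

```python
import numpy as np

# Simplified sketch of data-center control: predict PUE from sensor logs,
# then sweep controllable settings for the lowest predicted PUE.
# All data below is synthetic; DeepMind used deep networks, not least squares.

rng = np.random.default_rng(1)
# Columns: server load, outside temp, pump speed, fan speed (historical logs).
X = rng.uniform(0, 1, size=(1000, 4))
true_w = np.array([0.30, 0.20, -0.15, -0.10])        # unknown in reality
pue = 1.1 + X @ true_w + rng.normal(0, 0.01, 1000)   # simulated PUE readings

# Fit the predictive model by least squares.
w, *_ = np.linalg.lstsq(np.c_[np.ones(1000), X], pue, rcond=None)

def predicted_pue(load, temp, pump, fan):
    return w[0] + np.dot(w[1:], [load, temp, pump, fan])

# Given the current load and weather, sweep the controllable variables.
candidates = [(p, f) for p in np.linspace(0, 1, 11) for f in np.linspace(0, 1, 11)]
best = min(candidates, key=lambda c: predicted_pue(0.7, 0.5, *c))
print("recommended pump/fan settings:", best)
```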

DeepMind’s engineers say the next step is to identify where new data is needed to calculate further efficiencies, and to deploy sensors in those areas.

And the company won’t stop with Google’s data centers. “Because the algorithm is a general-purpose framework to understand complex dynamics, we plan to apply this to other challenges in the data centre environment and beyond in the coming months,” said DeepMind in a blog post.

“Possible applications of this technology include improving power plant conversion efficiency […], reducing semiconductor manufacturing energy and water usage, or helping manufacturing facilities increase throughput.” – James Vincent

 

Thursday 18th August 2016

The Real Bubble

I laugh when people say tech is a bubble. The establishment is the bubble. Who’s around in 2025 – Google or the EU? – Balaji S. Srinivasan

 

Thursday 18th August 2016

The AI Gold Rush

Companies are lining up to supply shovels to participants in the AI gold rush.

The name that comes up most frequently is NVIDIA (NASDAQ: NVDA), says Chris Dixon of Andreessen Horowitz; every AI startup seems to be using its GPU chips to train neural networks.

GPU capacity can also be rented in the cloud from Amazon (NASDAQ: AMZN) and Microsoft (NASDAQ: MSFT).

IBM (NYSE: IBM) and Google, meanwhile, are devising new chips specifically built to run AI software more quickly and efficiently.

And Google, Microsoft and IBM are making AI services such as speech recognition, sentence parsing and image analysis freely available online, allowing startups to combine such building blocks to form new AI products and services.

More than 300 companies from a range of industries have already built AI-powered apps using IBM’s Watson platform, says Guru Banavar of IBM, doing everything from filtering job candidates to picking wines. – The Economist

 

Thursday 18th August 2016

Most Active Investors in Artificial Intelligence

1 – Intel (NASDAQ:INTC)

2 – Google (NASDAQ: GOOGL)

3 – GE (NYSE: GE)

4 – Samsung (005930.KS)

Artificial intelligence dealmaking has exploded recently, leaping to a new quarterly record of over 140 deals in Q1’16.

1 – Intel Capital is the most active corporate investor on our list, having backed over a dozen unique AI-based companies, including healthcare startup Lumiata, machine-learning platform DataRobot, and imaging startup Perfant Technology.

2 – Google Ventures, which backed over 10 unique companies, ranked second as an active investor in AI. Google is also a major acquirer of AI startups.

CB Insights

 

Friday 30th September 2016

Chris Dixon: In 2 years Everyone Will Use Driverless Cars on Highways

Within ten years, roads will be full of driverless cars.

Maybe within two, depending on where you’re driving.

That’s what Chris Dixon, a partner at prestigious Silicon Valley investment firm Andreessen Horowitz believes.

Dixon has written extensively about the future of autonomous vehicles and invested in a number of startups in the space, from self-flying delivery drones to Comma.ai, a company founded by a young man who built a self-driving car in his garage.

“All of the trends we’ve been observing over the last decade — from cloud computing to cheaper processing — have hit a tipping point,” Dixon says. “This is the core that’s getting people excited about AI, and specifically around autonomous vehicles and autonomous cars.”


It’s also cheaper than ever to build a smart car. Dixon says many driverless car companies use tiny chips made by a publicly-traded company, NVIDIA. NVIDIA’s chips only cost a couple hundred dollars.

“For $200, you could get what 10 years ago was a supercomputer on a little board and put it in your car, and it can run one of these sophisticated deep learning systems,” he says.

Additionally, a lot of the AI for autonomous vehicles is open-sourced, like Google’s product TensorFlow. This allows everyone in the space to create more accurate technology faster, because they can learn from each other’s data sets and build off the findings.
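The open-source building blocks Dixon mentions make a toy version of this easy to write. The sketch below uses TensorFlow's Keras API to regress a steering value from a few synthetic distance-sensor readings; real systems learn from camera images with far larger networks, so this is illustrative only.

```python
import numpy as np
import tensorflow as tf

# Toy illustration of the open-source stack Dixon describes: a small
# TensorFlow/Keras network regressing a steering value from synthetic
# range-sensor readings. Not any real vehicle's model.

rng = np.random.default_rng(0)
sensors = rng.uniform(0, 1, size=(1000, 3))   # left / center / right clearance
steering = sensors[:, 0] - sensors[:, 2]      # toy target: steer toward space

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1),                 # predicted steering value
])
model.compile(optimizer="adam", loss="mse")
model.fit(sensors, steering, epochs=10, verbose=0)

# Lots of space on the left, obstacle on the right: should steer positive.
print(model.predict(np.array([[0.9, 0.5, 0.1]]))[0])
```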

“I bet in two years, it will be the norm that on the highway, you’re not driving half the time or you’ll be using driver assistants heavily,” he says.

“It’s easier on highways and in suburbs,” says Dixon. “So you can imagine pushing a button on your Uber or Lyft app, and depending on the situation and location, an autonomous car comes or a person comes.”

He adds, “When will an Uber roll up without a person in it in New York City? That’s farther away. But I think that’s more like five years away, not 20.”

Dixon likens the promise of self-driving to Henry Ford’s Model T, which was like the iPhone of the time — a real technology game changer. At first, consumer cars seemed impossible — roads weren’t paved and no one knew how to drive cars. But the product was a hit, and everything changed to make way for them. – Alyson Shontell

 

Friday 30th September 2016

Amazon Alexa


Every once in a while, a product comes along that changes everyone’s expectations of what’s possible in user interfaces. The Mac. The World Wide Web. The iPhone.

Alexa belongs in that elite group of game changers.

Siri didn’t make it over the hump, despite the buzz it created.

Neither did Google Now or Cortana, despite their amazing capabilities and their progress in adoption. (Mary Meeker reports that 20% of Google searches on mobile are now done by voice.)

But Alexa has done so many things right that everyone else has missed that it is, to my mind, the first winning product of the conversational era. Google should be studying Alexa’s voice UI and emulating it.


Human-Computer Interaction takes big leaps every once in a while. The next generation of speech interfaces is one of those leaps.

Humans are increasingly going to be interacting with devices that are able to listen to us and talk back (and increasingly, they are going to be able to see us as well, and to personalize their behavior based on who they recognize).

And they are going to get better and better at processing a wide range of ways of expressing intention, rather than limiting us to a single defined action like a touch, click, or swipe.

Alexa gives us a taste of the future, in the way that Google did around the turn of the millennium. We were still early in both the big data era and the cloud era, and Google was seen as an outlier, a company with a specialized product that, while amazing, seemed outside the mainstream of the industry. Within a few years, it WAS the mainstream, having changed the rules of the game forever.

What Alexa has shown us is that rather than trying to boil the ocean with AI and conversational interfaces, what we need to do is to apply human design intelligence, break down the conversation into smaller domains where you can deliver satisfying results, and within those domains, spend a lot of time thinking through the “fit and finish” so that interfaces are intuitive, interactions are complete, and what most people try to do “just works.” – Tim O’Reilly
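O’Reilly’s “smaller domains” point can be sketched as an intent router: each utterance is dispatched to a narrow handler that can fully satisfy it, and anything outside the known domains fails fast. The intent names and the keyword matcher below are invented for illustration, not Alexa’s actual skill API.

```python
# A sketch of domain decomposition: narrow handlers, each satisfying one intent.
# Intent names and the keyword matcher are invented for illustration.

HANDLERS = {
    "play_music":  lambda text: "Playing your station.",
    "set_timer":   lambda text: "Timer set.",
    "get_weather": lambda text: "It's 18 degrees and clear.",
}

KEYWORDS = {"play": "play_music", "timer": "set_timer", "weather": "get_weather"}

def route(utterance):
    for word, intent in KEYWORDS.items():
        if word in utterance.lower():
            return HANDLERS[intent](utterance)
    return "Sorry, I can't help with that yet."  # fail fast outside known domains

print(route("Play some jazz"))
print(route("What's the weather like?"))
```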

 

Friday 30th September 2016

Zurich Will Become Google’s Biggest AI Research Centre Outside the US


Google is extending its push into artificial intelligence with a new European research center dedicated to advancing the technology.

Based in Zurich, the team will focus on three areas – machine learning, natural language understanding and computer perception.

Emmanuel Mogenet, who will head the unit, said much of the research would be on teaching machines common sense.

There was, he said, “no limit on how big I grow the team”.

“We are very ambitious in terms of growth. The only limiting factor will be talent,” he told journalists gathered in Zurich to hear more about Google’s AI plans. – BBC

 

Wednesday 19th October 2016

Google’s DeepMind Achieves Speech-Generation Breakthrough

Google’s DeepMind unit, which is working to develop super-intelligent computers, has created a system for machine-generated speech that it says outperforms existing technology by 50 percent.

U.K.-based DeepMind, which Google acquired for about 400 million pounds ($533 million) in 2014, developed an artificial intelligence called WaveNet that can mimic human speech by learning how to form the individual sound waves a human voice creates, it said in a blog post Friday.
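Schematically, WaveNet forms speech one sample at a time, each sample conditioned on the waveform generated so far. In the sketch below a placeholder distribution stands in for the trained network (which in the real system is a stack of dilated causal convolutions); only the autoregressive loop is the point.

```python
import numpy as np

# Schematic of WaveNet-style generation: one audio sample at a time, each
# conditioned on the waveform so far. The placeholder distribution below
# stands in for the trained network; it is not DeepMind's model.

N_LEVELS = 256                      # WaveNet quantizes audio to 256 levels

def next_sample_distribution(context):
    """Placeholder for the trained network's softmax over the next sample."""
    center = context[-1] if len(context) else N_LEVELS // 2
    logits = -0.01 * (np.arange(N_LEVELS) - center) ** 2
    return np.exp(logits) / np.exp(logits).sum()

rng = np.random.default_rng(0)
waveform = []
for _ in range(16000):              # one second of audio at 16 kHz
    probs = next_sample_distribution(np.array(waveform[-1024:]))
    waveform.append(rng.choice(N_LEVELS, p=probs))
```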

In blind tests for U.S. English and Mandarin Chinese, human listeners found WaveNet-generated speech sounded more natural than that created with any of Google’s existing text-to-speech programs, which are based on different technologies. WaveNet still underperformed recordings of actual human speech.

Tech companies are likely to pay close attention to DeepMind’s breakthrough. Speech is becoming an increasingly important way humans interact with everything from mobile phones to cars. Amazon.com Inc., Apple Inc., Microsoft Corp. and Alphabet Inc.’s Google have all invested in personal digital assistants that primarily interact with users through speech.

Mark Bennett, the international director of Google Play, which sells Android apps, told an Android developer conference in London last week that 20 percent of mobile searches using Google are made by voice, not written text.

And while researchers have made great strides in getting computers to understand spoken language, their ability to talk back in ways that seem fully human has lagged. – Jeremy Kahn

 

Wednesday 30th November 2016

Building an AI Portfolio

The following stocks offer exposure to Artificial Intelligence. – Lee Banfield

——————————-

Google (NASDAQ: GOOGL)

Stock Price: $776

Market Cap: $531 billion


Healthcare Imaging – Google DeepMind

Machine Learning – GoogleML

Autonomous Systems – Google Self-driving Car

Hardware – GoogleTPU

Open Source Library – TensorFlow

IBM (NYSE: IBM)

Stock Price: $162

Market Cap: $154 billion


Enterprise Intelligence – IBM Watson

Healthcare – IBM Watson Health

Amazon (NASDAQ: AMZN)

Stock Price: $752

Market Cap: $355 billion


Personal Assistant – Amazon Alexa

Open Source Library – DSSTNE

Microsoft (NASDAQ: MSFT)

Stock Price: $60

Market Cap: $473 billion


Personal Assistant – Cortana

Open Source Libraries – CNTK, AzureML, DMTK

Nvidia (NASDAQ: NVDA)

Stock Price: $94

Market Cap: $50 billion


Hardware

Samsung (SSNLF:US)

Stock Price: $1,250

Market Cap: $176 billion


Personal Assistant – Viv

Qualcomm (NASDAQ: QCOM)

Stock Price: $68

Market Cap: $100 billion


Hardware

Tesla (NASDAQ: TSLA)

Stock Price: $188

Market Cap: $29 billion


Autonomous Vehicles

Illumina (NASDAQ: ILMN)

Stock Price: $133

Market Cap: $20 billion


Healthcare, Cancer Detection – Grail

Mobileye (NYSE: MBLY)

Stock Price: $37

Market Cap: $8 billion


Autonomous Vehicles

 

Wednesday 14th December 2016

Full Self-Driving Hardware Becoming Available on All Tesla Cars

This is huge news. It was just a few years ago that the sensors/cameras used on the Google cars to achieve Level 3 autonomy cost over $100,000.

To have the hardware component installed on all Tesla cars (including the $35k Model 3) moving forward happened years ahead of when most of us who follow autonomous vehicle tech would have imagined. From a tech perspective, this is mind-blowing news. – Nathan Wright

Musk announced that all Tesla cars being produced as of today, including the Model 3, will have everything they need onboard to achieve full Level 5 self-driving in the future.

The biggest change might be the new onboard computer, which provides over 40 times the processing power of the existing Tesla hardware and runs the in-house neural net the carmaker has developed to process data coming in from the vision, sonar and radar systems.

Musk said on a call discussing the most recent update to the existing driver assistance Autopilot software that it basically stretched computing power to the limit, which is why the upgraded CPU is required for full Level 5 autonomy. The new GPU is the Nvidia Titan, Musk said on the call, though it was a “tight call” between Nvidia and AMD.

The validation required for full autonomy will still take some more time, but Musk said on a call that it’s actually already looking like it’ll be at least two times as safe as human driving based on existing testing. – Darrell Etherington

 

Wednesday 14th December 2016

OpenAI Releases Universe, an Open Source Platform for Training AI


  • If Universe works, computers will use video games to learn how to function in the physical world. – Wired
  • Universe is the most intricate and intriguing tech I’ve worked on. – Greg Brockman

OpenAI, the billion-dollar San Francisco artificial intelligence lab backed by Tesla CEO Elon Musk, just unveiled a new virtual world.

This isn’t a digital playground for humans. It’s a school for artificial intelligence. It’s a place where AI can learn to do just about anything.

Other AI labs have built similar worlds where AI agents can learn on their own. Researchers at the University of Alberta offer the Arcade Learning Environment, where agents can learn to play old Atari games like Breakout and Space Invaders.

Microsoft offers Malmo, based on the game Minecraft. And earlier this month, Google’s DeepMind released an environment called DeepMind Lab.

But Universe is bigger than any of these. It’s an AI training ground that spans any software running on any machine, from games to web browsers to protein-folding programs.

“The domain we chose is everything that a human can do with a computer,” says Greg Brockman, OpenAI’s chief technology officer.

In theory, AI researchers can plug any application into Universe, which then provides a common way for AI “agents” to interact with these applications. That means researchers can build bots that learn to navigate one application and then another and then another. – Cade Metz
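In practice, Universe exposed this common interface through the same Python API as OpenAI Gym: agents receive pixel observations and act with the keyboard and mouse events a human would use. A minimal sketch of the loop, following the example OpenAI published with the release (the DuskDrive Flash game and the hold-up-arrow action come from that example):

```python
import gym
import universe  # importing registers the Universe environments with gym

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)  # connect to one local, Docker-backed remote environment
observation_n = env.reset()

while True:
    # Agents act through ordinary input events; here every agent
    # simply holds down the up-arrow key.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```

Because every environment, from a Flash game to a web browser, is driven through the same observation-and-event interface, the same agent code can be pointed at application after application.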

 

Saturday 24th December 2016

Huge Improvements as Google Translate Converts to AI Based System

  • The A.I. system made overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime.
  • The rollout includes translations between English and Spanish, French, Portuguese, German, Chinese, Japanese, Korean and Turkish. The rest of Translate’s hundred-odd languages are to come, with the aim of eight per month.
  • The Google Translate team had been steadily adding new languages and features to the old system, but gains in quality over the last four years had slowed considerably.

Late one Friday night in early November, Jun Rekimoto, a distinguished professor of human-computer interaction at the University of Tokyo, was online preparing for a lecture when he began to notice some peculiar posts rolling in on social media.

Apparently Google Translate, the company’s popular machine-translation service, had suddenly and almost immeasurably improved. Rekimoto visited Translate himself and began to experiment with it. He was astonished. He had to go to sleep, but Translate refused to relax its grip on his imagination.

Rekimoto promoted his discovery to his hundred thousand or so followers on Twitter, and over the next few hours thousands of people broadcast their own experiments with the machine-translation service.

As dawn broke over Tokyo, Google Translate was the No. 1 trend on Japanese Twitter, just above some cult anime series and the long-awaited new single from a girl-idol supergroup. Everybody wondered: How had Google Translate become so uncannily artful?

Google Translate’s side-by-side experiment to compare the new system with the old one. 

Schuster wanted to run a side-by-side experiment for English-French, but Hughes advised him to try something else. “English-French,” he said, “is so good that the improvement won’t be obvious.”

It was a challenge Schuster couldn’t resist. The benchmark metric to evaluate machine translation is called a BLEU score, which compares a machine translation with an average of many reliable human translations.

At the time, the best BLEU scores for English-French were in the high 20s. An improvement of one point was considered very good; an improvement of two was considered outstanding.
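For intuition, here is a toy sentence-level BLEU computation using NLTK’s implementation (a sketch only: Google’s reported scores are corpus-level BLEU over large test sets, so toy numbers like these aren’t comparable to the figures above):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Two reliable human translations of the same source sentence...
references = [
    "the cat sat on the mat".split(),
    "there is a cat on the mat".split(),
]
# ...and one machine translation to score against them.
hypothesis = "the cat is on the mat".split()

# Smoothing avoids zero scores when a short sentence misses an n-gram order.
smooth = SmoothingFunction().method1
score = sentence_bleu(references, hypothesis, smoothing_function=smooth)
print(f"BLEU: {100 * score:.1f}")  # scaled to 0-100, the convention used above
```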

The neural system, on the English-French language pair, showed an improvement over the old system of seven points. Hughes told Schuster’s team they hadn’t had even half as strong an improvement in their own system in the last four years.

To be sure this wasn’t some fluke in the metric, they also turned to their pool of human contractors to do a side-by-side comparison. The user-perception scores, in which sample sentences were graded from zero to six, showed an average improvement of 0.4 — roughly equivalent to the aggregate gains of the old system over its entire lifetime of development.

In mid-March, Hughes sent his team an email. All projects on the old system were to be suspended immediately.

Gideon Lewis-Kraus, The Great AI Awakening

 

Thursday 26th January 2017

THE SINGULARITY

Go World Champ Crushed by AI: “Not a Single Human Has Touched the Edge of the Truth of Go.”

A mysterious character named “Master” has swept through China, defeating many of the world’s top players in the ancient strategy game of Go.

Master played with inhuman speed, barely pausing to think. With a wide-eyed cartoon fox as an avatar, Master made moves that seemed foolish but inevitably led to victory this week over the world’s reigning Go champion, Ke Jie of China.

Master later revealed itself as an updated version of AlphaGo, an artificial-intelligence program designed by the DeepMind unit of Alphabet Inc.’s Google.

It was dramatic theater, and the latest sign that artificial intelligence is peerless in solving complex but defined problems. AI scientists predict computers will increasingly be able to search through thickets of alternatives to find patterns and solutions that elude the human mind.
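DeepMind’s published design (Silver et al., Nature, 2016) pairs deep neural networks with Monte Carlo tree search. To make “searching through thickets of alternatives” concrete, here is a toy sketch of the UCT rule that steers such a search; the numbers are illustrative, not from AlphaGo:

```python
import math

def uct_score(wins: int, visits: int, parent_visits: int, c: float = 1.4) -> float:
    """Balance exploitation (observed win rate) against exploration (rarely tried moves)."""
    if visits == 0:
        return float('inf')  # unvisited moves are always tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# (wins, visits) tallies for three candidate moves at one tree node.
children = {'move_a': (6, 10), 'move_b': (3, 4), 'move_c': (0, 0)}
best = max(children, key=lambda m: uct_score(*children[m], 14))
print(best)  # 'move_c': the search explores the untried move before exploiting
```

Repeating this selection down the tree concentrates simulations on the most promising lines, which is how the machine sifts alternatives no human could enumerate. (AlphaGo replaces the raw win-rate term with neural-network value and policy estimates.)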

Master’s arrival has shaken China’s human Go players.

“After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”

Master’s record—60 wins, 0 losses over seven days ending Wednesday—led virtuoso Go player Gu Li to wonder what other conventional beliefs might be smashed by computers in the future. – Eva Dou and Olivia Geng

 

Thursday 26th January 2017

Quantum Computers Advancing Much Faster Than Expected

  • Development of a quantum computer, if practical, would mark a leap forward in computing capability far greater than that from the abacus to a modern-day supercomputer, with performance gains in the billion-fold realm and beyond. – Margaret Rouse

In January 2015 I wrote an article that explored how the rate at which engineering advances were being announced would have a profound and largely unexpected impact on the timeline for the development of Quantum Computers and Quantum Technologies in general.

In the 20 months or so since I wrote that article, a great deal has happened, and even my most optimistic and unrealistic-sounding projections have proven far too cautious. There have been so many announcements about Quantum Computing and Quantum Technologies that it is sometimes hard to keep up with developments.

A startling series of engineering advances and the slow realisation that large corporations have been allocating serious resources to this sector has meant that a quantum computer is no longer seen as belonging on the pages of a science fiction novel.

With the benefit of hindsight, I still think the single most important seismic shift in the sector in the recent past occurred when Google took the initiative by investing in and backing John Martinis’ team in 2014.

A direct result of that move, and of the enormous investment that has been put behind the effort, was the disclosure that Google believe they will achieve “quantum supremacy” within a year. The best analysis of Google’s amazing journey thus far is (in my view) this New Scientist article.
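For the intuition behind “quantum supremacy”, consider a toy sketch of standard quantum-computing arithmetic (nothing here is specific to Google’s hardware): an n-qubit register is described by 2^n complex amplitudes, so each added qubit doubles what a classical simulator must track.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate

def hadamard_all(n_qubits: int) -> np.ndarray:
    """State after applying a Hadamard to every qubit of |00...0>."""
    op = H
    for _ in range(n_qubits - 1):
        op = np.kron(op, H)            # full operator is H tensor H ... tensor H
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0                     # start in |00...0>
    return op @ state                  # equal superposition over all 2**n basis states

print(hadamard_all(3).real.round(3))   # eight equal amplitudes of 1/sqrt(8)

for n in (10, 20, 50):
    print(f"{n} qubits -> {2 ** n:,} complex amplitudes to simulate classically")
# At 16 bytes per amplitude, 50 qubits needs roughly 18 petabytes, which is
# why a modest-sized quantum device can outrun any classical simulation.
```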

When quantum computers emerge into common usage (and it is now only a question of when, not if), the machines will benefit from developments in quantum mechanics that span the post second world war period.

Despite this, the vast majority of us are blissfully unaware of these developments, which have been described as having an influence on humankind that will ultimately rival that of the industrial revolution. Perhaps Arkwright’s machine and Stephenson’s engine were equally remote from people’s everyday lives when they were first unveiled.

This is an exciting time, to say the least, for anyone with an interest in the sector from a commercial or academic standpoint. For those of us who have been involved in the sector for a while, the past 18 months or so have become a bit of a haze: we barely get used to one reality before another set of circumstances arrives. – Ilyas Khan
