Microsoft AI

Monday 24th February 2014

Speech Recognition

Speak in English, they hear in Chinese in your voice, and vice versa. Demo already live

Today, translation is happening in textboxes. But wait till you click someone’s Oculus avatar to suddenly hear them speaking your tongue. You might even imagine a programmable Jawbone headset which would allow the same realtime translation to happen in the real world. – Balaji S. Srinivasan, Partner at Andreessen Horowitz

 

Friday 25th April 2014

The Biggest Event in Human History

Artificial intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

There are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains. – Stephen Hawking

 

Monday 2nd June 2014

Microsoft

Skype video phone call at codecon with real time voice translation German to English. Awesome. Future. Well done.

Skype voice language translation product will be launched this year – Mark Suster, Upfront Ventures

I can’t help but be reminded of Google no longer understanding how its systems are learning to identify objects in photos so accurately – the technology is hugely impressive and, in developing a mind of its own, kind of disturbing – David Meyer

 

Tuesday 8th July 2014

Deep Learning

When asked what he would focus on if he were currently a computer science student, and what would be the most significant type of technology in coming years, Bill Gates’ answer focused on machine learning:

“The ultimate is computers that learn. So called deep learning which started at Microsoft and is now being used by many researchers looks like a real advance that may finally learn.”

There has already been more progress in the last three years in video and audio recognition than had ever been made before, he says. This encompasses everything from basic machine-learning algorithms to ones that will one day be able to read a book and understand what it means – Max Nisen

 

Wednesday 20th August 2014

The Quest to Build an Artificial Brain

Deep learning has suddenly spread across the commercial tech world, from Google to Microsoft to Baidu to Twitter, just a few years after most AI researchers openly scoffed at it.

All of these tech companies are now exploring a particular type of deep learning called convolutional neural networks, aiming to build web services that can do things like automatically understand natural language and recognize images. At Google, “convnets” power the voice recognition system available on Android phones. At China’s Baidu, they drive a new visual search engine.

But this is just a start. The deep learning community is working to improve the technology. Today’s most widely used convolutional neural nets rely almost exclusively on supervised learning: if you want a net to learn to identify a particular object, you have to show it many labeled examples. Yet unsupervised learning—or learning from unlabeled data—is closer to how real brains learn, and some deep learning research is exploring this area.
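The supervised/unsupervised distinction above can be made concrete with a toy sketch. The data and both routines below are invented for illustration (deliberately simple algorithms, not anything these labs actually run): the supervised routine needs a human-provided label on every example, while the unsupervised one (a one-dimensional k-means) finds structure in raw numbers on its own.

```python
def train_supervised(examples):
    """Supervised: every example arrives with a label.
    Here we just memorize the mean feature value per label."""
    sums, counts = {}, {}
    for features, label in examples:
        sums[label] = sums.get(label, 0.0) + features
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def train_unsupervised(points, k=2, steps=10):
    """Unsupervised: no labels; the algorithm (1-D k-means here)
    discovers cluster structure in the raw data on its own."""
    centers = points[:k]
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

labeled = [(1.0, "cat"), (1.2, "cat"), (9.0, "dog"), (9.4, "dog")]
unlabeled = [1.0, 1.2, 9.0, 9.4]

print(train_supervised(labeled))      # per-label averages
print(train_unsupervised(unlabeled))  # discovered cluster centers
```

Note that the unsupervised routine arrives at the same two groups without ever being told which point is a "cat" or a "dog".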

“How this is done in the brain is pretty much completely unknown. Synapses adjust themselves, but we don’t have a clear picture for what the algorithm of the cortex is,” says LeCun. “We know the ultimate answer is unsupervised learning, but we don’t have the answer yet.” – Daniela Hernandez

 

Monday 10th November 2014

Skype Translator: Breaking Down Language Barriers

Skype Translator is derived from decades of research in speech recognition, automatic translation, and machine learning. We are on the verge of having a tool, available to all, that allows us to speak universally with anyone on the planet.

Imagine being able to speak in German, and have your message conveyed, grammatically and semantically correctly, in English. That future is here. – Peter Lee, Microsoft Research VP

 

Monday 29th December 2014

Skype’s Real-Life Babel Fish Translates English/Spanish in Real Time

Microsoft has released its first preview of Skype Translator, which allows real-time conversations between spoken English and Spanish and will be extended to more languages.

It is now available as a free download for Windows 8.1, starting with spoken English and Spanish along with more than 40 text-based languages for instant messaging.

Gurdeep Pall, Microsoft’s corporate vice-president of Skype and Lync, said in a blog post that Skype Translator would “open up endless possibilities”, adding: “Skype Translator relies on machine learning, which means that the more the technology is used, the smarter it gets. We are starting with English and Spanish, and as more people use the Skype Translator preview with these languages, the quality will continually improve.”

Skype Translator is part of Microsoft’s artificial intelligence research relying on machine learning and deep neural networks, much like Google and Apple’s voice assistants. It can understand speech and then rapidly translate it into another language before using text-to-speech systems to speak the translation back to the user, or in this case the other party.
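The three-stage pipeline described above—understand speech, translate it, speak the result—can be sketched as chained functions. All three stages below are hypothetical stand-ins with a toy dictionary, not Skype's actual components or API:

```python
def recognize_speech(audio):
    # Stand-in: a real system would run a speech-recognition model here.
    return audio["transcript"]

def translate_text(text, source, target):
    # Stand-in: a real system would call a machine-translation model,
    # not a word-for-word lookup table.
    toy_dictionary = {("es", "en"): {"hola": "hello", "mundo": "world"}}
    table = toy_dictionary.get((source, target), {})
    return " ".join(table.get(word, word) for word in text.split())

def synthesize_speech(text):
    # Stand-in: a real system would render audio with text-to-speech.
    return {"spoken": text}

def translate_call(audio, source="es", target="en"):
    text = recognize_speech(audio)          # stage 1: speech recognition
    translated = translate_text(text, source, target)  # stage 2: translation
    return synthesize_speech(translated)    # stage 3: text-to-speech

result = translate_call({"transcript": "hola mundo"})
print(result["spoken"])  # hello world
```

The value of the chained design is that each stage can be improved independently as more usage data arrives.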

The more people use the preview the more data the Skype team will have to improve the translation – Samuel Gibbs

 

Sunday 18th January 2015

Artificial Intelligence

In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist – Robert McMillan

 

Wednesday 11th February 2015

Artificial Intelligence

Behind much of the proliferation of AI startups are large companies such as Google, Microsoft Corp., and Amazon, which have quietly built up AI capabilities over the past decade to handle enormous sets of data and make predictions, like which ad someone is more likely to click on. Starting in the mid-2000s, the companies resurrected AI techniques developed in the 1980s, paired them with powerful computers and started making money.

Their efforts have resulted in products like Apple’s chirpy assistant Siri and Google’s self-driving cars. They have also spurred deal-making, with Facebook acquiring voice-recognition AI startup Wit.ai last month and Google buying DeepMind Technologies Ltd. in January 2014.

For Google, “the biggest thing will be artificial intelligence,” Chairman Eric Schmidt said last year in an interview with Bloomberg Television’s Emily Chang.

The AI boom has also been stoked by universities, which have noticed the commercial success of AI at places like Google and taken advantage of falling hardware costs to do more research and collaborate with closely held companies.

Last November, the University of California at San Francisco began working with Palo Alto, California-based MetaMind on two projects: one to spot prostate cancer and the other to predict what may happen to a patient after reaching a hospital’s intensive care unit so that staff can more quickly tailor their approach to the person – Jack Clark

 

Sunday 22nd February 2015

An AI Turning Point: Microsoft Beats Humans on an Image Recognition Challenge

Microsoft’s system achieved a 4.94 percent error rate for the correct classification of images in the 2012 version of the widely recognized ImageNet data set, compared with a 5.1 percent error rate among humans, according to the paper.

The challenge involved identifying objects in the images and then correctly selecting the most accurate categories for the images, out of 1,000 options. Categories included “hatchet,” “geyser,” and “microwave.”
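The error rate being compared can be illustrated with toy data (this is a simplified sketch of how such a figure is scored, not the actual ImageNet evaluation code): an image counts as an error when the true category is not among the model's top guesses, and the challenge scored the top five.

```python
def top5_error(ranked_predictions, labels):
    """ranked_predictions: for each image, the model's best guesses in order.
    An image is wrong only if the true label is in none of the top five."""
    wrong = sum(1 for guesses, truth in zip(ranked_predictions, labels)
                if truth not in guesses[:5])
    return wrong / len(labels)

labels = ["hatchet", "geyser", "microwave", "geyser"]
ranked = [
    ["hatchet", "axe", "cleaver", "hammer", "knife"],
    ["volcano", "geyser", "fountain", "spring", "steam"],
    ["oven", "toaster", "stove", "dishwasher", "fridge"],  # miss
    ["geyser", "fountain", "volcano", "spring", "steam"],
]
print(top5_error(ranked, labels))  # 0.25
```

With 1,000 candidate categories per image, even a 4.94 percent miss rate over this metric represents very fine-grained discrimination.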

“To the best of our knowledge, our result surpasses for the first time the reported human-level performance on this visual recognition challenge,” Microsoft researchers Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun wrote in the paper, which is dated Feb. 6 – Jordan Novet

 

Monday 6th April 2015

The “Babel Fish” Universal Translator

Wired (Dec 2014) wrote: “Microsoft is already using some of the text translation technology underpinning Skype Translate to power its Bing Translate search engine translation service, and to jump start the foreign language translation of its products, manuals, and hundreds of thousands of support documents.” http://www.wired.com/2014/12/skype-used-ai-build-amazing-new-language-translator/

Live Science wrote (21 Mar 2015): “Ongoing research could eventually power machine translators that rival the fluidity of sci-fi translators, Google researcher Geoffrey Hinton suggested in a Reddit AMA. He likened the possibilities to those of the ‘Babel Fish’ universal translator in Douglas Adams’s Hitchhiker’s Guide to the Galaxy. (In the book, the Babel Fish is a small leechlike fish inserted into the ear that provides instant, universal translation.)” http://www.livescience.com/50216-star-wars-artificial-intelligence-universal-translator.html

GigaOm (29 Jan 2015) quoted Geoffrey Hinton: “In a few years’ time we will put it on a chip that fits into someone’s ear and have an English-decoding chip that’s just like a real Babel fish.” https://gigaom.com/2015/01/29/how-ai-can-help-build-a-universal-real-time-translator/

Singularity 2045

 

Monday 13th April 2015

What Would a World Without Language Barriers Look Like?

At Microsoft, hundreds of humans are trying to train a machine to listen, translate, and then speak.

Last December, the company announced the limited release of Skype Translator, which can translate a conversation between two people videochatting in different languages, in real time.

The software, which is still invite-only, can handle English, Spanish, Italian, and Mandarin. Google, too, has a smartphone app—free and available to the public—that can transcribe speech in one language, translate it, and then speak the result aloud in another. It’s not hard to imagine Google embedding this technology into its own videochat platform to a similar end.

Skype Translator will only get better; it depends on machine learning, a process that evaluates its own outputs and makes adjustments accordingly. It’s what has enabled mapping apps and Google searches to improve as more people use them, and the same will likely happen to live translation – Joe Pinsker

 

Monday 18th May 2015

Disruption of Healthcare

By 2025, existing healthcare institutions will be crushed as new business models with better and more efficient care emerge.

Thousands of startups, as well as today’s data giants (Google, Apple, Microsoft, SAP, IBM, etc.) will all enter this lucrative $3.8 trillion healthcare industry with new business models that dematerialize, demonetize and democratize today’s bureaucratic and inefficient system.

Biometric sensing (wearables) and AI will make each of us the CEOs of our own health. Large-scale genomic sequencing and machine learning will allow us to understand the root cause of cancer, heart disease and neurodegenerative disease and what to do about it. Robotic surgeons can carry out an autonomous surgical procedure perfectly (every time) for pennies on the dollar. Each of us will be able to regrow a heart, liver, lung or kidney when we need it, instead of waiting for the donor to die – Peter Diamandis

 

Wednesday 28th October 2015

Skype Update Offers Real-Time Voice Language Translation

Microsoft first released Skype Translator almost a year ago as a standalone app designed for Windows 8.

Early adopters have been providing regular feedback to Microsoft, and the company clearly feels it’s time to open up its Skype Translator to everyone using the Windows desktop app.

The software giant is integrating its impressive translation feature directly into the desktop version of Skype, opening it up to Windows 7, Windows 8, and Windows 10 users.

Six voice languages will be supported at launch: English, French, German, Italian, Mandarin, and Spanish. Skype will now let you hold a conversation in any of them, without ever needing to learn a language.

Microsoft will roll out an update to the Skype for Windows desktop app over the next few weeks, and a new translator button will show up within conversations. You can enable translation for audio and video calls, but if you just want to translate instant messages then 50 languages in total will be supported. – Tom Warren

 

Tuesday 29th December 2015

Building The Quantum Dream Machine

John Martinis has been researching how quantum computers could work for 30 years. Now he could be on the verge of finally making a useful one.

With his new Google lab up and running, Martinis guesses that he can demonstrate a small but useful quantum computer in two or three years. “We often say to each other that we’re in the process of giving birth to the quantum computer industry,” he says.

The new computer would let a Google coder run calculations in a coffee break that would take a supercomputer of today millions of years.

The software that Google has developed on ordinary computers to drive cars or answer questions could become vastly more intelligent. And earlier-stage ideas bubbling up at Google and its parent company, such as robots that can serve as emergency responders or software that can converse at a human level, might become real.

As recently as last week the prospect of a quantum computer doing anything useful within a few years seemed remote. Researchers in government, academic, and corporate labs were far from combining enough qubits to make even a simple proof-of-principle machine.

A well-funded Canadian startup called D-Wave Systems sold a few of what it called “the world’s first commercial quantum computers” but spent years failing to convince experts that the machines actually were doing what a quantum computer should.

Then NASA summoned journalists to building N-258 at its Ames Research Center in Mountain View, California, which since 2013 has hosted a D-Wave computer bought by Google.

There Hartmut Neven, who leads the Quantum Artificial Intelligence lab Google established to experiment with the D-Wave machine, unveiled the first real evidence that it can offer the power proponents of quantum computing have promised.

In a carefully designed test, the superconducting chip inside D-Wave’s computer—known as a quantum annealer—had performed 100 million times faster than a conventional processor.

However, this kind of advantage needs to be available in practical computing tasks, not just contrived tests. “We need to make it easier to take a problem that comes up at an engineer’s desk and put it into the computer,” said Neven.

That’s where Martinis comes in. Neven doesn’t think D-Wave can get a version of its quantum annealer ready to serve Google’s engineers quickly enough, so he hired Martinis to do it.

“It became clear that we can’t just wait,” Neven says. “There’s a list of shortcomings that need to be overcome in order to arrive at a real technology.”

He says the qubits on D-Wave’s chip are too unreliable and aren’t wired together thickly enough. (D-Wave’s CEO, Vern Brownell, responds that he’s not worried about competition from Google.)

Google will be competing not only with whatever improvements D-Wave can make, but also with Microsoft and IBM, which have substantial quantum computing projects of their own.

But those companies are focused on designs much further from becoming practically useful. Indeed, a rough internal time line for Google’s project estimates that Martinis’s group can make a quantum annealer with 100 qubits as soon as 2017.

The difficulty of creating qubits that are stable enough is the reason we don’t have quantum computers yet. But Martinis has been working on that for more than 11 years and thinks he’s nearly there.

The coherence time of his qubits, or the length of time they can maintain a superposition, is tens of microseconds—about 10,000 times the figure for those on D-Wave’s chip.

Martinis aims to show off a complete universal quantum computer with about 100 qubits around the same time he delivers Google’s new quantum annealer, in about two years.

He thinks that once he can get his qubits reliable enough to put 100 of them on a universal quantum chip, the path to combining many more will open up. “This is something we understand pretty well,” he says. “It’s hard to get coherence but easy to scale up.”

Figuring out how Martinis’s chips can make Google’s software less stupid falls to Neven.

He thinks that the prodigious power of qubits will narrow the gap between machine learning and biological learning—and remake the field of artificial intelligence. “Machine learning will be transformed into quantum learning,” he says. That could mean software that can learn from messier data, or from less data, or even without explicit instruction.

Neven muses that this kind of computational muscle could be the key to giving computers capabilities today limited to humans. “People talk about whether we can make creative machines–the most creative systems we can build will be quantum AI systems,” he says.

Neven pictures rows of superconducting chips lined up in data centers for Google engineers to access over the Internet relatively soon.

“I would predict that in 10 years there’s nothing but quantum machine learning–you don’t do the conventional way anymore,” he says.

A smiling Martinis warily accepts that vision. “I like that, but it’s hard,” he says. “He can say that, but I have to build it.” – Tom Simonite

 

Wednesday 17th February 2016

Viv’s Competition: Today’s Virtual Personal Assistants

Name: Siri
Company: Apple
Communication: Voice
The original personal assistant, launched on the iPhone in 2011 and incorporated into many Apple products. Siri can answer questions, send messages, place calls, make dinner reservations through OpenTable and more.

Name: Google Now
Company: Google
Communication: Voice and typing
Available through the Google app or Chrome browser, capabilities include answering questions, getting directions and creating reminders. It also proactively delivers information to users that it predicts they might want, such as traffic conditions during commutes.

Name: Cortana
Company: Microsoft
Communication: Voice
Built into Microsoft phones and Windows 10, Cortana will help you find things on your PC, manage your calendar and track packages. It also tells jokes.

Name: Alexa
Company: Amazon
Communication: Voice
Embedded inside Amazon’s Echo, the cylindrical speaker device that went on general sale in June 2015 in the US. Call on Alexa to stream music, give cooking assistance and reorder Amazon items.

Name: M
Company: Facebook
Communication: Typing
Released in August 2015 as a pilot and integrated into Facebook Messenger, M supports sophisticated interactions but behind the scenes relies on both artificial intelligence and humans to fulfil requests, though the idea is that eventually it will know enough to operate on its own.

Zoe Corbyn

 

Sunday 13th March 2016

Investing in Robotics and AI Companies

Here are some AI (and robotics) related companies to think about.

I’m not saying you should buy them (now) or sell for that matter, but they are definitely worth considering at the right valuations.

Think about becoming an owner of AI and robotics companies while there is still time. I plan to buy some of the most obvious ones (including Google) in the ongoing market downturn (2016-2017).

Top 6 most obvious AI companies

  • Alphabet (Google)
  • Facebook (M, Deep Learning)
  • IBM (Watson, neuromorphic chips)
  • Apple (Siri)
  • MSFT (Skype real-time translation, emotion recognition)
  • Amazon (customer prediction; link to old article)

Yes, I’m US centric. So sue me 🙂

Other

  • SAP (BI)
  • Oracle (BI)
  • Sony
  • Samsung
  • Twitter
  • Baidu
  • Alibaba
  • NEC
  • Nidec
  • Nuance (HHMM, speech)
  • Marketo
  • Opower
  • Nippon Ceramic
  • Pacific Industrial

Private companies (*I think):

  • *Mobvoi
  • *Scaled Inference
  • *Kensho
  • *Expect Labs
  • *Vicarious
  • *Nara Logics
  • *Context Relevant
  • *MetaMind
  • *Rethink Robotics
  • *Sentient Technologies
  • *Mobileye

General AI areas to consider when searching for AI companies

  • Self-driving cars
  • Language processing
  • Search agents
  • Image processing
  • Robotics
  • Machine learning
  • Experts
  • Oil and mineral exploration
  • Pharmaceutical research
  • Materials research
  • Computer chips (neuromorphic, memristors)
  • Energy, power utilities

Mikael Syding

 

Sunday 24th April 2016

A $2 Billion Chip to Accelerate Artificial Intelligence

Two years ago we were talking to 100 companies interested in using deep learning. This year we’re supporting 3,500. In two years there has been 35X growth. – Jen-Hsun Huang, CEO of Nvidia

The field of artificial intelligence has experienced a striking spurt of progress in recent years, with software becoming much better at understanding images, speech, and new tasks such as how to play games. Now the company whose hardware has underpinned much of that progress has created a chip to keep it going.

Nvidia announced a new chip called the Tesla P100 that’s designed to put more power behind a technique called deep learning. This technique has produced recent major advances such as the Google software AlphaGo that defeated the world’s top Go player last month.

Deep learning involves passing data through large collections of crudely simulated neurons. The P100 could help deliver more breakthroughs by making it possible for computer scientists to feed more data to their artificial neural networks or to create larger collections of virtual neurons.
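The phrase "passing data through large collections of crudely simulated neurons" can be illustrated with a minimal forward pass. The weights below are arbitrary numbers chosen for the example, not a trained network, and real networks contain millions of such neurons rather than three:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs followed by a nonlinearity:
    # the "crude simulation" of a biological neuron.
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

def layer(inputs, weight_rows, biases):
    # One layer = many neurons reading the same inputs in parallel.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

x = [0.5, -1.0]                                   # input data
hidden = layer(x, [[1.0, 0.5], [-0.5, 1.0]], [0.0, 0.1])
output = layer(hidden, [[1.5, -1.0]], [0.0])
print(output)
```

Chips like the P100 matter because this arithmetic—many independent multiply-accumulate operations per layer—parallelizes naturally across thousands of GPU cores.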

Artificial neural networks have been around for decades, but deep learning only became relevant in the last five years, after researchers figured out that chips originally designed to handle video-game graphics made the technique much more powerful. Graphics processors remain crucial for deep learning, but Nvidia CEO Jen-Hsun Huang says that it is now time to make chips customized for this use case.

At a company event in San Jose, he said, “For the first time we designed a [graphics-processing] architecture dedicated to accelerating AI and to accelerating deep learning.” Nvidia spent more than $2 billion on R&D to produce the new chip, said Huang.

It has a total of 15 billion transistors, roughly three times as many as Nvidia’s previous chips. Huang said an artificial neural network powered by the new chip could learn from incoming data 12 times as fast as was possible using Nvidia’s previous best chip.

Deep-learning researchers from Facebook, Microsoft, and other companies that Nvidia granted early access to the new chip said they expect it to accelerate their progress by allowing them to work with larger collections of neurons.

“I think we’re going to be able to go quite a bit larger than we have been able to in the past, like 30 times bigger,” said Bryan Catanzaro, who works on deep learning at the Chinese search company Baidu. Increasing the size of neural networks has previously enabled major jumps in the smartness of software. For example, last year Microsoft managed to make software that beats humans at recognizing objects in photos by creating a much larger neural network.

Huang of Nvidia said that the new chip is already in production and that he expects cloud-computing companies to start using it this year. IBM, Dell, and HP are expected to sell it inside servers starting next year. – Tom Simonite

 

Monday 23rd May 2016

European Commission’s Billion Euro Bet on Quantum Computing

A new €1 billion ($1.13 billion) project has been announced by the European Commission aimed at developing quantum technologies over the next 10 years and placing Europe at the forefront of “the second quantum revolution.”

Quantum computers have been hailed for their revolutionary potential in everything from space exploration to cancer treatment.

The Quantum Flagship will be similar in size, time scale and ambition to the EC’s other ongoing Flagship projects: the Graphene Flagship and the Human Brain Project. As well as quantum computers, the initiative will aim to address other aspects of quantum technologies, including quantum secure communication, quantum sensing and quantum simulation.

Since they were first theorized by the physicist Richard Feynman in 1982, quantum computers have promised to bring about a new era of ultra-powerful computing. One of the field’s pioneers, physicist David Deutsch, famously claimed that quantum computers hold the potential to solve problems that would take a classical computer longer than the age of the universe.


One of the main hopes of the initiative is that quantum technologies will make the leap from research labs to commercial and industrial applications. Matthias Troyer, a computational physics professor at the Institute for Theoretical Physics at ETH Zurich—one of the institutes set to benefit from the fund—believes the initiative acknowledges the fact that this step is now ready to be made.

“Quantum technologies have matured to the point where we are ready to transition from academic projects to the development of competitive commercial products that within the next decade will be able to perform tasks that classical devices are incapable of,” Troyer tells Newsweek.

This is a sentiment shared by Ilyas Khan, co-founder and CEO at Cambridge Quantum Computing (CQC). Khan claims that the Quantum Flagship puts Europe at the front of the race to build the world’s first quantum machines.

CQC has been one of the pioneers in early quantum computer research and in 2015 developed the first operating system capable of accurately simulating a quantum processor. The t|ket> operating system allows research teams to determine the type of operations a quantum computer can perform.

“It has become increasingly clear that it is now only a matter of a relatively short time before quantum technologies become of practical importance at the strategic level for governments and large corporations,” Khan says. – Anthony Cuthbertson

 

Thursday 18th August 2016

The AI Gold Rush

Companies are lining up to supply shovels to participants in the AI gold rush.

The name that comes up most frequently is NVIDIA (NASDAQ: NVDA), says Chris Dixon of Andreessen Horowitz; every AI startup seems to be using its GPU chips to train neural networks.

GPU capacity can also be rented in the cloud from Amazon (NASDAQ: AMZN) and Microsoft (NASDAQ: MSFT).

IBM (NYSE: IBM) and Google, meanwhile, are devising new chips specifically built to run AI software more quickly and efficiently.

And Google, Microsoft and IBM are making AI services such as speech recognition, sentence parsing and image analysis freely available online, allowing startups to combine such building blocks to form new AI products and services.

More than 300 companies from a range of industries have already built AI-powered apps using IBM’s Watson platform, says Guru Banavar of IBM, doing everything from filtering job candidates to picking wines. – The Economist

 

Friday 30th September 2016

Amazon Alexa


Every once in a while, a product comes along that changes everyone’s expectations of what’s possible in user interfaces. The Mac. The World Wide Web. The iPhone.

Alexa belongs in that elite group of game changers.

Siri didn’t make it over the hump, despite the buzz it created.

Neither did Google Now or Cortana, despite their amazing capabilities and their progress in adoption. (Mary Meeker reports that 20% of Google searches on mobile are now done by voice.)

But Alexa has done so many things right that everyone else has missed that it is, to my mind, the first winning product of the conversational era. Google should be studying Alexa’s voice UI and emulating it.


Human-Computer Interaction takes big leaps every once in a while. The next generation of speech interfaces is one of those leaps.

Humans are increasingly going to be interacting with devices that are able to listen to us and talk back (and increasingly, they are going to be able to see us as well, and to personalize their behavior based on who they recognize).

And they are going to get better and better at processing a wide range of ways of expressing intention, rather than limiting us to a single defined action like a touch, click, or swipe.

Alexa gives us a taste of the future, in the way that Google did around the turn of the millennium. We were still early in both the big data era and the cloud era, and Google was seen as an outlier, a company with a specialized product that, while amazing, seemed outside the mainstream of the industry. Within a few years, it WAS the mainstream, having changed the rules of the game forever.

What Alexa has shown us is that rather than trying to boil the ocean with AI and conversational interfaces, what we need to do is apply human design intelligence: break the conversation down into smaller domains where you can deliver satisfying results, and within those domains spend a lot of time thinking through the “fit and finish” so that interfaces are intuitive, interactions are complete, and what most people try to do “just works.” – Tim O’Reilly
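O'Reilly's "smaller domains" point can be sketched as a toy intent router in which each domain claims only the requests it can satisfy well, and everything else falls through honestly. The domain names and keyword lists are invented for illustration; a production assistant would use trained classifiers rather than keyword matching:

```python
# Each domain is narrow by design, so its responses can be polished.
DOMAINS = {
    "music":    {"play", "pause", "skip"},
    "weather":  {"forecast", "temperature", "rain"},
    "shopping": {"reorder", "buy", "cart"},
}

def route(utterance):
    """Send an utterance to the first domain whose keywords it mentions."""
    words = set(utterance.lower().split())
    for domain, keywords in DOMAINS.items():
        if words & keywords:
            return domain
    # Outside every domain: admit it rather than guess badly.
    return "fallback"

print(route("play some jazz"))               # music
print(route("will it rain tomorrow"))        # weather
print(route("what is the meaning of life"))  # fallback
```

The design choice is exactly the one the passage describes: it is better for an assistant to do a few domains completely than to attempt open-ended conversation and fail unpredictably.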

 

Wednesday 19th October 2016

Google’s DeepMind Achieves Speech-Generation Breakthrough

Google’s DeepMind unit, which is working to develop super-intelligent computers, has created a system for machine-generated speech that it says outperforms existing technology by 50 percent.

U.K.-based DeepMind, which Google acquired for about 400 million pounds ($533 million) in 2014, developed an artificial intelligence called WaveNet that can mimic human speech by learning how to form the individual sound waves a human voice creates, it said in a blog post Friday.
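The sample-by-sample generation described here can be caricatured as an autoregressive loop: each new audio sample is predicted from the samples that came before. The "model" below is a trivial two-tap linear predictor with made-up coefficients, so this shows only the autoregressive structure, not WaveNet's deep convolutional network:

```python
def next_sample(history, weights):
    # A real WaveNet predicts a distribution over the next sample with a
    # deep convolutional net; here we just extrapolate linearly from the
    # most recent samples.
    recent = history[-len(weights):]
    return sum(w * s for w, s in zip(weights, recent))

def generate(seed, weights, n):
    """Grow a waveform one sample at a time, feeding each new
    sample back in as context for the next prediction."""
    samples = list(seed)
    for _ in range(n):
        samples.append(next_sample(samples, weights))
    return samples

waveform = generate(seed=[0.0, 0.1], weights=[-0.5, 1.2], n=4)
print(waveform)
```

This loop is also why such models are slow to run: audio at 16,000 samples per second means 16,000 sequential predictions for every second of speech.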

In blind tests for U.S. English and Mandarin Chinese, human listeners found WaveNet-generated speech sounded more natural than that created with any of Google’s existing text-to-speech programs, which are based on different technologies. WaveNet still underperformed recordings of actual human speech.

Tech companies are likely to pay close attention to DeepMind’s breakthrough. Speech is becoming an increasingly important way humans interact with everything from mobile phones to cars. Amazon.com Inc., Apple Inc., Microsoft Corp. and Alphabet Inc.’s Google have all invested in personal digital assistants that primarily interact with users through speech.

Mark Bennett, the international director of Google Play, which sells Android apps, told an Android developer conference in London last week that 20 percent of mobile searches using Google are made by voice, not written text.

And while researchers have made great strides in getting computers to understand spoken language, their ability to talk back in ways that seem fully human has lagged. – Jeremy Kahn

 

Wednesday 30th November 2016

Building an AI Portfolio

The following stocks offer exposure to Artificial Intelligence. – Lee Banfield

——————————-

Google (NASDAQ: GOOGL)

Stock Price: $776

Market Cap: $531 billion


Healthcare – Google DeepMind

Machine Learning – GoogleML

Autonomous Systems – Google Self-driving Car

Hardware – Google TPU

Open Source Library – TensorFlow

IBM (NYSE: IBM)

Stock Price: $162

Market Cap: $154 billion


Enterprise Intelligence – IBM Watson

Healthcare – IBM Watson Health

Amazon (NASDAQ: AMZN)

Stock Price: $752

Market Cap: $355 billion


Personal Assistant – Amazon Alexa

Open Source Library – DSSTNE

Microsoft (NASDAQ: MSFT)

Stock Price: $60

Market Cap: $473 billion


Personal Assistant – Cortana

Open Source Libraries – CNTK, AzureML, DMTK

Nvidia (NASDAQ: NVDA)

Stock Price: $94

Market Cap: $50 billion


Hardware

Samsung (SSNLF:US)

Stock Price: $1,250

Market Cap: $176 billion


Personal Assistant – Viv

Qualcomm (NASDAQ: QCOM)

Stock Price: $68

Market Cap: $100 billion


Hardware

Tesla (NASDAQ: TSLA)

Stock Price: $188

Market Cap: $29 billion


Autonomous Vehicles

Illumina (NASDAQ: ILMN)

Stock Price: $133

Market Cap: $20 billion


Healthcare, Cancer Detection – Grail

Mobileye (NYSE: MBLY)

Stock Price: $37

Market Cap: $8 billion


Autonomous Vehicles

 

Wednesday 14th December 2016

OpenAI Releases Universe, an Open Source Platform for Training AI


  • If Universe works, computers will use video games to learn how to function in the physical world. – Wired
  • Universe is the most intricate and intriguing tech I’ve worked on. – Greg Brockman

OpenAI, the billion-dollar San Francisco artificial intelligence lab backed by Tesla CEO Elon Musk, just unveiled a new virtual world.

This isn’t a digital playground for humans. It’s a school for artificial intelligence. It’s a place where AI can learn to do just about anything.

Other AI labs have built similar worlds where AI agents can learn on their own. Researchers at the University of Alberta offer the Arcade Learning Environment, where agents can learn to play old Atari games like Breakout and Space Invaders.

Microsoft offers Malmo, based on the game Minecraft. And earlier this month, Google’s DeepMind released an environment called DeepMind Lab.

But Universe is bigger than any of these. It’s an AI training ground that spans any software running on any machine, from games to web browsers to protein folders.

“The domain we chose is everything that a human can do with a computer,” says Greg Brockman, OpenAI’s chief technology officer.

In theory, AI researchers can plug any application into Universe, which then provides a common way for AI “agents” to interact with these applications. That means researchers can build bots that learn to navigate one application and then another and then another. – Cade Metz
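The "common way for agents to interact" follows the pattern OpenAI popularized with Gym: every environment, whatever software it wraps, exposes the same reset/step interface, so an agent written once can be pointed at any of them. The environment below is a made-up stand-in to illustrate the loop, not part of OpenAI's actual API.

```python
# Sketch of the Gym-style agent loop that a platform like Universe
# standardizes. TinyEnv is a hypothetical environment: the agent must
# walk from position 0 to position 10.

class TinyEnv:
    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):  # action: -1 (left) or +1 (right)
        self.pos += action
        done = self.pos >= 10
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}  # observation, reward, done, info

env = TinyEnv()
obs, done, steps, total_reward = env.reset(), False, 0, 0.0
while not done:
    # A trivial agent for illustration: always move right.
    obs, reward, done, info = env.step(+1)
    total_reward += reward
    steps += 1
print(steps, total_reward)  # 10 1.0
```

Because the agent only ever sees observations, rewards, and a done flag, swapping `TinyEnv` for a browser, a game, or any other wrapped application leaves the agent code unchanged, which is the point of a shared training interface.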
