Monday 24th February 2014
Speak in English, they hear in Chinese in your voice, and vice versa. Demo already live
Today, translation is happening in textboxes. But wait till you click someone’s Oculus avatar to suddenly hear them speaking your tongue. You might even imagine a programmable Jawbone headset which would allow the same realtime translation to happen in the real world. – Balaji S. Srinivasan, Partner at Andreessen Horowitz
Wednesday 20th August 2014
The Quest to Build an Artificial Brain
Deep learning has suddenly spread across the commercial tech world, from Google to Microsoft to Baidu to Twitter, just a few years after most AI researchers openly scoffed at it.
All of these tech companies are now exploring a particular type of deep learning called convolutional neural networks, aiming to build web services that can do things like automatically understand natural language and recognize images. At Google, “convnets” power the voice recognition system available on Android phones. At China’s Baidu, they drive a new visual search engine.
But this is just a start, and the deep learning community is working to improve the technology. Today’s most widely used convolutional neural nets rely almost exclusively on supervised learning: if you want a net to learn to identify a particular object, you have to feed it a large number of labeled examples. Yet unsupervised learning, or learning from unlabeled data, is closer to how real brains learn, and some deep learning research is exploring this area.
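The supervised-learning requirement described above can be illustrated with the simplest trainable model. This is a hypothetical sketch, not the convnets these companies use: a perceptron whose weights change only when a labeled example tells it a prediction was wrong, which is exactly why supervised methods need so many labels.

```python
def train_perceptron(examples, epochs=20):
    """Supervised learning in miniature: examples are (features, label) pairs
    with label +1 or -1; weights are corrected only on misclassified examples."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # learning happens only because a label says we were wrong
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy labeled data (invented for illustration): two linearly separable clusters
labeled = [([2, 2], 1), ([3, 1], 1), ([-1, -2], -1), ([-2, -1], -1)]
w, b = train_perceptron(labeled)
```

Remove the labels and the update rule has nothing to react to, which is the gap unsupervised learning aims to close.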
“How this is done in the brain is pretty much completely unknown. Synapses adjust themselves, but we don’t have a clear picture for what the algorithm of the cortex is,” says Yann LeCun of Facebook. “We know the ultimate answer is unsupervised learning, but we don’t have the answer yet.” – Daniela Hernandez
Sunday 18th January 2015
In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.
AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.
“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist. – Robert McMillan
Wednesday 11th February 2015
Behind much of the proliferation of AI startups are large companies such as Google, Microsoft Corp., and Amazon, which have quietly built up AI capabilities over the past decade to handle enormous sets of data and make predictions, like which ad someone is more likely to click on. Starting in the mid-2000s, the companies resurrected AI techniques developed in the 1980s, paired them with powerful computers and started making money.
Their efforts have resulted in products like Apple’s chirpy assistant Siri and Google’s self-driving cars. The boom has also spurred deal-making, with Facebook acquiring voice-recognition AI startup Wit.ai last month and Google buying DeepMind Technologies Ltd. in January 2014.
For Google, “the biggest thing will be artificial intelligence,” Chairman Eric Schmidt said last year in an interview with Bloomberg Television’s Emily Chang.
The AI boom has also been stoked by universities, which have noticed the commercial success of AI at places like Google and taken advantage of falling hardware costs to do more research and collaborate with closely held companies.
Last November, the University of California at San Francisco began working with Palo Alto, California-based MetaMind on two projects: one to spot prostate cancer and the other to predict what may happen to a patient after reaching a hospital’s intensive care unit so that staff can more quickly tailor their approach to the person – Jack Clark
Wednesday 17th February 2016
Viv’s Competition: Today’s Virtual Personal Assistants
Name: Siri
The original personal assistant, launched on the iPhone in 2011 and incorporated into many Apple products. Siri can answer questions, send messages, place calls, make dinner reservations through OpenTable and more.
Name: Google Now
Communication: Voice and typing
Available through the Google app or Chrome browser, capabilities include answering questions, getting directions and creating reminders. It also proactively delivers information to users that it predicts they might want, such as traffic conditions during commutes.
Name: Cortana
Built into Microsoft phones and Windows 10, Cortana will help you find things on your PC, manage your calendar and track packages. It also tells jokes.
Name: Alexa
Embedded inside Amazon’s Echo, the cylindrical speaker device that went on general sale in June 2015 in the US. Call on Alexa to stream music, give cooking assistance and reorder Amazon items.
Name: M
Released in August 2015 as a pilot and integrated into Facebook Messenger, M supports sophisticated interactions but behind the scenes relies on both artificial intelligence and humans to fulfil requests, though the idea is that eventually it will know enough to operate on its own.
Wednesday 17th February 2016
Google Achieves AI ‘Breakthrough’ by Beating Go Champion
The Chinese game is viewed as a much tougher challenge than chess for computers because there are many more ways a Go match can play out.
Earlier on Wednesday, Facebook’s chief executive had said its own AI project had been “getting close” to beating humans at Go.
DeepMind’s chief executive, Demis Hassabis, said its AlphaGo software followed a three-stage process, which began with making it analyse 30 million moves from games played by humans.
“It learns what patterns generally occur – what sort are good and what sort are bad. If you like, that’s the part of the program that learns the intuitive part of Go.
“It now plays different versions of itself millions and millions of times, and each time it gets incrementally better. It learns from its mistakes.
“The final step is known as the Monte Carlo Tree Search, which is really the planning stage.
“Now it has all the intuitive knowledge about which positions are good in Go, it can make long-range plans.”
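The “planning stage” Hassabis describes, Monte Carlo Tree Search, can be sketched in miniature. The following is an illustrative toy, not AlphaGo’s implementation: MCTS choosing a move in the simple take-away game Nim (remove 1–3 stones, taking the last stone wins), with random playouts standing in for AlphaGo’s learned evaluations.

```python
import math
import random

def legal_moves(pile):
    """Nim variant: remove 1-3 stones; whoever takes the last stone wins."""
    return [m for m in (1, 2, 3) if m <= pile]

def rollout(pile, to_move):
    """Play random moves to the end; return the winning player (0 or 1)."""
    while True:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return to_move
        to_move = 1 - to_move

class Node:
    def __init__(self, pile, to_move, parent=None, move=None):
        self.pile, self.to_move = pile, to_move
        self.parent, self.move = parent, move
        self.children = []
        self.wins = 0        # wins for the player who moved INTO this node
        self.visits = 0
        self.untried = legal_moves(pile)

def ucb(child, parent, c=1.4):
    """Upper Confidence Bound: exploit win rate, explore rarely visited moves."""
    return child.wins / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def mcts(pile, iterations=2000, seed=0):
    random.seed(seed)
    root = Node(pile, to_move=0)
    for _ in range(iterations):
        node = root
        # 1. Selection: follow UCB until a node with untried moves (or terminal)
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ucb(ch, node))
        # 2. Expansion: add one child for an untried move
        if node.untried:
            m = node.untried.pop()
            child = Node(node.pile - m, 1 - node.to_move, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new position
        if node.pile == 0:
            winner = 1 - node.to_move  # the player who just emptied the pile won
        else:
            winner = rollout(node.pile, node.to_move)
        # 4. Backpropagation: update statistics along the path to the root
        while node is not None:
            node.visits += 1
            if winner != node.to_move:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move
    return max(root.children, key=lambda ch: ch.visits).move
```

AlphaGo replaces the random playouts with its trained value and policy networks, which is the “intuitive knowledge” guiding the search.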
“Many of the best programmers in the world were asked last year how long it would take for a program to beat a top professional, and most of them were predicting 10-plus years,” Mr Hassabis said.
“The reason it was quicker than people expected was the pace of the innovation going on with the underlying algorithms and also how much more potential you can get by combining different algorithms together.”
Prof Zoubin Ghahramani, of the University of Cambridge, said: “This is certainly a major breakthrough for AI, with wider implications.
“The technical idea that underlies it is the idea of reinforcement learning – getting computers to learn to improve their behaviour to achieve goals. That could be used for decision-making problems – to help doctors make treatment plans, for example, in businesses or anywhere where you’d like to have computers assist humans in decision making.”
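Reinforcement learning as Ghahramani describes it, a computer improving its behaviour to achieve a goal, can be shown with a minimal tabular example. This is an invented sketch (the five-state corridor and its reward are illustrative, and AlphaGo uses far more sophisticated methods): Q-learning on a short corridor where the agent earns a reward only for reaching the right-hand end.

```python
import random

N_STATES = 5         # a corridor of states 0..4; entering state 4 earns reward 1
ACTIONS = (1, -1)    # step right / step left

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: behaviour improves from reward alone, with no labels."""
    random.seed(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly take the best-known action, sometimes explore
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward reward + discounted future value
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2
    return Q
```

After training, the learned table prefers stepping right in every state, i.e. the behaviour that achieves the goal, which is the core loop behind the decision-making applications Ghahramani mentions.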
DeepMind now intends to pit AlphaGo against Lee Sedol – the world’s top Go player – in Seoul in March.
“For us, Go is the pinnacle of board game challenges,” said Mr Hassabis. “Now, we are moving towards 3D games or simulations that are much more like the real world rather than the Atari games we tackled last year.” – BBC News
Sunday 13th March 2016
Investing in Robotics and AI Companies
Here are some AI (and robotics) related companies to think about.
I’m not saying you should buy them (now) or sell for that matter, but they are definitely worth considering at the right valuations.
Think about becoming an owner of AI and robotics companies while there is still time. I plan to buy some of the most obvious ones (including Google) in the ongoing market downturn (2016-2017).
Top 6 most obvious AI companies
- Alphabet (Google)
- Facebook (M, Deep Learning)
- IBM (Watson, neuromorphic chips)
- Apple (Siri)
- Microsoft (Skype real-time translation, emotion recognition)
- Amazon (customer prediction)
Yes, I’m US centric. So sue me 🙂
- SAP (BI)
- Oracle (BI)
- Nuance (HHMM, speech)
- Nippon Ceramic
- Pacific Industrial
Private companies (*I think):
- *Scaled Inference
- *Expect Labs
- *Nara Logics
- *Context Relevant
- *Rethink Robotics
- *Sentient Technologies
General AI areas to consider when searching for AI companies
- Self-driving cars
- Language processing
- Search agents
- Image processing
- Machine learning
- Oil and mineral exploration
- Pharmaceutical research
- Materials research
- Computer chips (neuromorphic, memristors)
- Energy, power utilities
Sunday 24th April 2016
AI Hits the Mainstream
Insurance, finance, manufacturing, oil and gas, auto manufacturing, health care: these may not be the industries that first spring to mind when you think of artificial intelligence. But as technology companies like Google and Baidu build labs and pioneer advances in the field, a broader group of industries is beginning to investigate how AI can work for them, too.
Today the industry selling AI software and services remains a small one. Dave Schubmehl, research director at IDC, calculates that sales for all companies selling cognitive software platforms—excluding companies like Google and Facebook, which do research for their own use—added up to $1 billion last year.
He predicts that by 2020 that number will exceed $10 billion. Other than a few large players like IBM and Palantir Technologies, AI remains a market of startups: 2,600 companies, by Bloomberg’s count.
General Electric is using AI to improve service on its highly engineered jet engines. By combining a form of AI called computer vision (originally developed to categorize movies and TV footage when GE owned NBC Universal) with CAD drawings and data from cameras and infrared detectors, GE has improved its detection of cracks and other problems in airplane engine blades.
The system eliminates errors common to traditional human reviews, such as a dip in detections on Fridays and Mondays, but also relies on human experts to confirm its alerts. The program then learns from that feedback, says Colin Parris, GE’s vice president of software research. – Nanette Byrnes
Sunday 24th April 2016
A $2 Billion Chip to Accelerate Artificial Intelligence
Two years ago we were talking to 100 companies interested in using deep learning. This year we’re supporting 3,500. In two years there has been 35X growth. – Jen-Hsun Huang, CEO of Nvidia
The field of artificial intelligence has experienced a striking spurt of progress in recent years, with software becoming much better at understanding images and speech, and at new tasks such as playing games. Now the company whose hardware has underpinned much of that progress has created a chip to keep it going.
Nvidia announced a new chip called the Tesla P100 that’s designed to put more power behind a technique called deep learning. This technique has produced recent major advances such as the Google software AlphaGo that defeated the world’s top Go player last month.
Deep learning involves passing data through large collections of crudely simulated neurons. The P100 could help deliver more breakthroughs by making it possible for computer scientists to feed more data to their artificial neural networks or to create larger collections of virtual neurons.
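“Passing data through large collections of crudely simulated neurons” can be made concrete with a toy forward pass. This is a minimal sketch, assuming nothing about Nvidia’s or any company’s actual software: each layer of simulated neurons computes weighted sums of its inputs and applies a simple activation, and the expensive part a GPU accelerates is exactly these bulk multiply-accumulate operations.

```python
import random

def dense_layer(inputs, weights, biases):
    """One layer of simulated neurons: weighted sum of inputs + ReLU activation."""
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, layers):
    """Pass data through the whole stack of layers."""
    for weights, biases in layers:
        x = dense_layer(x, weights, biases)
    return x

random.seed(0)
# A tiny 3-layer network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs
sizes = [4, 8, 8, 2]
layers = [([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
           [0.0] * n_out)
          for n_in, n_out in zip(sizes, sizes[1:])]
out = forward([0.5, -0.2, 0.1, 0.9], layers)
```

Scaling the layer widths and the volume of data fed through them is what chips like the P100 are built for.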
Artificial neural networks have been around for decades, but deep learning only became relevant in the last five years, after researchers figured out that chips originally designed to handle video-game graphics made the technique much more powerful. Graphics processors remain crucial for deep learning, but Nvidia CEO Jen-Hsun Huang says that it is now time to make chips customized for this use case.
At a company event in San Jose, he said, “For the first time we designed a [graphics-processing] architecture dedicated to accelerating AI and to accelerating deep learning.” Nvidia spent more than $2 billion on R&D to produce the new chip, said Huang.
It has a total of 15 billion transistors, roughly three times as many as Nvidia’s previous chips. Huang said an artificial neural network powered by the new chip could learn from incoming data 12 times as fast as was possible using Nvidia’s previous best chip.
Deep-learning researchers from Facebook, Microsoft, and other companies that Nvidia granted early access to the new chip said they expect it to accelerate their progress by allowing them to work with larger collections of neurons.
“I think we’re going to be able to go quite a bit larger than we have been able to in the past, like 30 times bigger,” said Bryan Catanzaro, who works on deep learning at the Chinese search company Baidu. Increasing the size of neural networks has previously enabled major jumps in the smartness of software. For example, last year Microsoft managed to make software that beats humans at recognizing objects in photos by creating a much larger neural network.
Huang of Nvidia said that the new chip is already in production and that he expects cloud-computing companies to start using it this year. IBM, Dell, and HP are expected to sell it inside servers starting next year. – Tom Simonite
Monday 23rd May 2016
Artificial Intelligence Better Than Humans at Cancer Detection
- Machines are now better than humans at detecting cancer both in pictures and in free text documents. What’s next? – Sprezzaturian
- And so it begins: convolutional nets built into ultrasound machine to help detect breast cancer (Samsung Medison unveils deep learning-based breast ultrasound imaging device) – Yann LeCun, Director of AI Research, Facebook
Researchers from the Regenstrief Institute and Indiana University School of Informatics and Computing say they’ve found that open-source machine learning tools are as good as — or better than — humans in extracting crucial meaning from free-text (unstructured) pathology reports and detecting cancer cases.
The computer tools are also faster and less resource-intensive.
“We think that it’s no longer necessary for humans to spend time reviewing text reports to determine if cancer is present or not,” said study senior author Shaun Grannis, M.D., M.S., interim director of the Regenstrief Center of Biomedical Informatics.
“We have come to the point in time that technology can handle this. A human’s time is better spent helping other humans by providing them with better clinical care. Everything — physician practices, health care systems, health information exchanges, insurers, as well as public health departments — is awash in oceans of data. How can we hope to make sense of this deluge of data? Humans can’t do it — but computers can.”
“This is a major infrastructure advance — we have the technology, we have the data, we have the software from which we saw accurate, rapid review of vast amounts of data without human oversight or supervision.” – Kurzweil AI
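The kind of free-text classification described above can be illustrated with a deliberately tiny model. This sketch is not the Regenstrief/IU system (the article does not name its tools or features); it is a naive Bayes classifier over invented miniature “report” snippets, showing how labeled text alone can drive detection.

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (text, label) pairs. Returns word counts, doc counts, vocabulary."""
    counts = {}          # label -> Counter of word frequencies
    totals = Counter()   # label -> number of training documents
    for text, label in docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    n_docs = sum(totals.values())
    scores = {}
    for label, c in counts.items():
        score = math.log(totals[label] / n_docs)   # log prior
        n_tokens = sum(c.values())
        for w in text.lower().split():
            # Laplace (add-one) smoothing so unseen words don't zero out a class
            score += math.log((c[w] + 1) / (n_tokens + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Invented miniature "pathology reports" (not real data)
reports = [
    ("invasive carcinoma identified", "cancer"),
    ("malignant cells detected", "cancer"),
    ("adenocarcinoma present in specimen", "cancer"),
    ("benign tissue no malignancy identified", "no_cancer"),
    ("normal cells no carcinoma", "no_cancer"),
    ("benign findings in specimen", "no_cancer"),
]
counts, totals, vocab = train_nb(reports)
```

Real systems use far richer features and vastly more reports, but the principle is the same: statistics learned from labeled text replace a human reading each report.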