Wednesday 16th April 2014
Use it. Really, the more we use it the more it understands. It anticipates. That’s exactly what I’m beginning to find.
Why isn’t this technology used more frequently? Could it be that it’s just not advanced enough yet?
I put it more down to human nature. Our own procrastination, if you like. We’re not taught to simply pick up Siri. So it’s people.
In what way can we expect to see the technology advance next?
I think when it can actually start to reason with you. Let’s say you’re debating an appointment. Which one is most important? And it can actually debate that with you and give you a hypothesis as it were — why one could be preferred over the other.
Friday 25th April 2014
The Biggest Event in Human History
Artificial intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.
The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.
There are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains. – Stephen Hawking
Monday 19th May 2014
5 Areas in Robotics that will Transform Society and Their Economic Impact
1 – Drones: The next five years for drones look very promising. Expect to see drones become part of society’s information infrastructure as news agencies, TV companies, photographers, real estate agents, moviemakers, industrial giants, pizza deliveries, logistics companies, local governments, agriculture and others embrace drone technology.
2 – Medical Procedures & Operations: IBM’s Watson may become the best diagnostician in the world and be greatly in demand, contributing billions to IBM’s sales whilst potentially saving millions of lives. The global medical robotic systems market was worth $5.48 billion in 2011 and is expected to reach $13.6 billion in 2018, growing at a compound annual growth rate of 12.6% from 2012. Surgical robots are expected to enjoy the largest revenue share.
3 – Robotic Prosthetics & Exoskeletons: The economic market is currently quite small, somewhere around $100 to $150 million; however, with recent advances in prosthetics and exoskeletons it is expected to grow considerably to over $1.5 billion in the next 3 to 5 years, and higher still thereafter.
4 – Artificial Assistants: This domain has the largest possible early impact on the largest number of people. Artificial intelligence pioneers such as Google director of engineering Ray Kurzweil have indicated anyone with a smartphone or tablet will be using ‘cognitive assistants’ by 2017.
5 – Driverless Cars: Autonomous vehicles, including the iconic Google self-driving cars, will be on the road commercially before 2018. The long-term impact of self-driving cars and other autonomous vehicles on society will be a radical change in how we commute. There will also likely be a sharp reduction in traffic accidents, the majority of which are caused by human error.
Monday 23rd June 2014
Tim Tuttle, CEO and founder of Expect Labs, said that in the last 18 months, voice recognition accuracy improved 30%, a bigger gain than in the entire previous decade. A third of searches are now being done using voice commands.
Voice recognition uses machine learning algorithms that depend on people actually using them to get better. Tuttle believes we’re at the beginning of a virtuous cycle wherein wider adoption is yielding more data; more data translates into better performance; better performance results in wider adoption, more data, and so on – Jason Dorrier
Monday 29th December 2014
Skype’s Real Life Babel Fish Translates English/Spanish in Real Time
Microsoft has released its first preview of Skype Translator, which allows real-time conversations between spoken English and Spanish and will be extended to more languages.
It is now available as a free download for Windows 8.1, starting with spoken English and Spanish along with more than 40 text-based languages for instant messaging.
Gurdeep Pall, Microsoft’s corporate vice-president of Skype and Lync, said in a blog post that Skype Translator would “open up endless possibilities”, adding: “Skype Translator relies on machine learning, which means that the more the technology is used, the smarter it gets. We are starting with English and Spanish, and as more people use the Skype Translator preview with these languages, the quality will continually improve.”
Skype Translator is part of Microsoft’s artificial intelligence research relying on machine learning and deep neural networks, much like Google and Apple’s voice assistants. It can understand speech and then rapidly translate it into another language before using text-to-speech systems to speak the translation back to the user, or in this case the other party.
The more people use the preview the more data the Skype team will have to improve the translation – Samuel Gibbs
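The Skype Translator pipeline described above (speech recognition, then machine translation, then text-to-speech) can be sketched as three chained stages. Everything here is an illustrative stand-in: the function names and the toy phrase table are invented for the example, not Skype’s actual API or models.

```python
# Hypothetical three-stage translation pipeline:
# speech recognition -> machine translation -> text-to-speech.

def recognize_speech(audio: bytes) -> str:
    # Stand-in: a real system runs a deep-neural-network acoustic model.
    # Here we pretend the audio bytes are already a transcript.
    return audio.decode("utf-8")

def translate(text: str, source: str, target: str) -> str:
    # Stand-in: a toy English->Spanish phrase table; a real system uses
    # statistical or neural machine translation trained on usage data.
    phrases = {("en", "es"): {"hello": "hola", "thank you": "gracias"}}
    return phrases[(source, target)].get(text.lower(), text)

def synthesize(text: str) -> bytes:
    # Stand-in for a text-to-speech engine speaking the translation back.
    return text.encode("utf-8")

def translate_call(audio: bytes, source: str = "en", target: str = "es") -> bytes:
    text = recognize_speech(audio)
    translated = translate(text, source, target)
    return synthesize(translated)

print(translate_call(b"hello"))  # b'hola'
```

The chaining is the point: each stage is independent, which is also why more usage data improves each stage separately.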
Saturday 24th January 2015
Dr. Kurzweil’s current predictions include:
1. self-driving cars by 2017
2. personal assistant search engines by 2019
3. switching off our fat cells by 2020
4. fully immersive virtual realities by 2023
5. 100 percent energy from solar by 2033
Dr. Kurzweil predicts that growth in three areas — genetics, nanotechnology and robotics (GNR) — will be the basis of the singularity. In his book The Singularity Is Near he says, “It will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but that transcends our biological roots. There will be no distinction, post singularity, between human and machine or between physical and virtual reality.” – Lucy Flores
Wednesday 11th February 2015
Behind much of the proliferation of AI startups are large companies such as Google, Microsoft Corp., and Amazon, which have quietly built up AI capabilities over the past decade to handle enormous sets of data and make predictions, like which ad someone is more likely to click on. Starting in the mid-2000s, the companies resurrected AI techniques developed in the 1980s, paired them with powerful computers and started making money.
Their efforts have resulted in products like Apple’s chirpy assistant Siri and Google’s self-driving cars. They have also spurred deal-making, with Facebook acquiring voice-recognition AI startup Wit.ai last month and Google buying DeepMind Technologies Ltd. in January 2014.
For Google, “the biggest thing will be artificial intelligence,” Chairman Eric Schmidt said last year in an interview with Bloomberg Television’s Emily Chang.
The AI boom has also been stoked by universities, which have noticed the commercial success of AI at places like Google and taken advantage of falling hardware costs to do more research and collaborate with closely held companies.
Last November, the University of California at San Francisco began working with Palo Alto, California-based MetaMind on two projects: one to spot prostate cancer and the other to predict what may happen to a patient after reaching a hospital’s intensive care unit so that staff can more quickly tailor their approach to the person – Jack Clark
Wednesday 6th May 2015
Artificial Intelligence: Viv
* Seed money came from the richest man in China and Gary Morgenthaler, the first investor in Siri. “I looked at the work they were doing,” Morgenthaler remembers, “and said, This is as good or better than anything I’ve seen in twenty years.”
* The solution was something they call the “planning objective function”. They created a program that could write its own code and find its own solutions. They named their invention Viv, after the Latin for “life.”
* They think they’re about six months from a beta test and a year from a public launch.
It’s a completely new concept for talking to machines and making them do our bidding—not just asking them for simple information but also making them think and react.
Right now, a founder named Adam Cheyer is controlling Viv from his computer. “I’m gonna start with a few simple queries,” Cheyer says, “then ramp it up a little bit.” He speaks a question out loud: “What’s the status of JetBlue 133?” A second later, Viv returns with an answer: “Late again, what else is new?”
To achieve this simple result, Viv went to an airline database called FlightStats.com and got the estimated arrival time, along with records showing that JetBlue 133 is on time just 62 percent of the time.
Onscreen, for the demo, Viv’s reasoning is displayed in a series of boxes—and this is where things get really extraordinary, because you can see Viv begin to reason and solve problems on its own. For each problem it’s presented, Viv writes the program to find the solution. Presented with a question about flight status, Viv decided to dig out the historical record on its own.
Now let’s make it more interesting. “What’s the best available seat on Virgin 351 next Wednesday?”
Viv searches an airline-services distributor called Travelport, the back end for Expedia and Orbitz, and finds twenty-eight available seats. Then it goes to SeatGuru.com for information on individual seats per plane, and this is when Viv really starts to show off. Every time you use Viv, you teach it your personal preferences. These go into a private database linked with your profile, currently called “My Stuff,” which will be (they promise) under your complete control. So Cheyer is talking to his personal version of Viv, and it knows that he likes aisle seats and extra legroom. The solution is seat 9D, an economy-class exit-row seat with extra legroom – John H. Richardson
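The seat-selection step described above (filter the available seats, then rank them against the user’s stored preferences) can be sketched in a few lines. The seat data, the “My Stuff” preference list, and the scoring rule are all invented for illustration; Viv’s real reasoning composes live services such as Travelport and SeatGuru on the fly.

```python
# Sketch of preference-driven seat choice: score each available seat by
# how many of the user's stored preferences it satisfies, pick the best.
from dataclasses import dataclass

@dataclass
class Seat:
    number: str
    aisle: bool
    extra_legroom: bool

def best_seat(available, preferences):
    # Each preference names a boolean attribute of Seat; the score is the
    # count of preferences the seat satisfies.
    def score(seat):
        return sum(getattr(seat, pref) for pref in preferences)
    return max(available, key=score)

available = [
    Seat("14B", aisle=False, extra_legroom=False),
    Seat("22C", aisle=True,  extra_legroom=False),
    Seat("9D",  aisle=True,  extra_legroom=True),   # economy exit row
]
my_stuff = ["aisle", "extra_legroom"]  # hypothetical stored preferences

print(best_seat(available, my_stuff).number)  # 9D
```

With Cheyer’s preferences (aisle seat, extra legroom), seat 9D wins on both counts, matching the demo’s result.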
Wednesday 17th February 2016
Viv: Artificial Intelligence Virtual Personal Assistant
The company is working on what co-founder and CEO Dag Kittlaus describes as a “global brain” – a new form of voice-controlled virtual personal assistant.
With the odd flash of personality, Viv will be able to perform thousands of tasks, and it won’t just be stuck in a phone but integrated into everything from fridges to cars. “Tell Viv what you want and it will orchestrate this massive network of services that will take care of it,” he says.
It is an ambitious project but Kittlaus isn’t without a track record. The last company he co-founded invented Siri, the original virtual assistant now standard in Apple products. Siri Inc was acquired by the tech giant for a reported $200m in 2010.
But, Kittlaus says, all these virtual assistants he helped birth are limited in their capabilities. Enter Viv. “What happens when you have a system that is 10,000 times more capable?” he asks. “It will shift the economics of the internet.”
Kittlaus pulls out his phone to demonstrate a prototype (he won’t say when Viv will launch but intimates that 2016 will be a big year). “I need a ride to the nearest pediatrician in San Jose,” he says to the phone. It produces a list of pediatricians sorted by distance and with their ratings courtesy of online doctor-booking service ZocDoc. Kittlaus taps one and the phone shows how far away the Uber is that could come and collect him. “If I click, there is going to be a car on the way,” he says. “See how those services just work together.”
He moves on to another example. “Send my mom a dozen yellow roses.” Viv can combine information in his contact list – where he has tagged his mother – with the services of an online florist that delivers across the US. Other requests Kittlaus says Viv will be able to accomplish include “On the way to my brother’s house, I need to pick up some good wine that goes well with lasagne” and “Find me a place to take my kids in the last week of March in the Caribbean”. Later I test out how well both Siri and Google’s virtual assistant perform on these examples. Neither gets far.
Viv can be different because it is being designed to be totally open, says Kittlaus. Any service, product or knowledge that any company or individual wants to imbue with a speaking component can be plugged into the network to work together with the others already in there. (Dozens of companies, from Uber to Florist One, are in the prototype). Other virtual assistants are essentially closed. Apple and only Apple, for example, decides what capabilities get integrated into Siri.
Viv’s biggest secret is the technology to bring the different services together on the fly to respond to requests for which it hasn’t been specifically programmed. “It is a program that writes its own program, which is the only way you can scale thousands of services working together that know nothing about one another,” says Kittlaus. Other personal assistants generally have their responses programmed by a developer. They are, essentially, scripted. There was no choice but to do things differently, says Kittlaus. To think of every combination of things that could be asked would be impossible. Viv will also include elements of learning; it will adapt as it comes to know your preferences.
Expect a phone app initially, says Kittlaus, but the loftier ambition is to incorporate Viv into all manner of devices, including cars. He imagines Viv’s icon becoming ubiquitous. “Anywhere you see it will mean you can talk to that thing,” he says. Of course this will require time: for companies to volunteer their services, and for users to come on board. But Kittlaus says some of the world’s largest consumer electronics companies are “very interested in plugging in”.
Viv has the potential to upend internet economics, says Kittlaus. Companies currently spend billions to advertise online with Google, and much traffic arrives based on web users’ keyword searches. But if instead requests are directed at Viv, it would cut out the middleman. The team are still exploring different business models, but one involves charging a processing fee on top of every transaction. – Zoe Corbyn
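Kittlaus’s “program that writes its own program” — chaining services that know nothing about one another — can be illustrated with a tiny planner: each service declares what it needs and what it produces, and the planner chains them until the goal is producible. The service registry (contacts, a catalogue, a florist) and the greedy algorithm are invented for the example; this is an illustration of dynamic composition, not Viv’s actual technology.

```python
# Minimal sketch of dynamic service composition: services declare
# inputs/outputs, and a planner chains them to satisfy a request.

def plan(services, have, want):
    """Greedily chain services until the wanted fact is producible."""
    steps, have = [], set(have)
    while want not in have:
        progress = False
        for name, (needs, produces) in services.items():
            if produces not in have and needs <= have:
                steps.append(name)      # run this service next
                have.add(produces)      # its output becomes available
                progress = True
        if not progress:
            raise ValueError("no plan found")
    return steps

# Hypothetical registry for "send my mom a dozen yellow roses":
# each entry maps a service name to (inputs it needs, output it produces).
services = {
    "contacts":  (set(), "moms_address"),
    "florist":   ({"moms_address", "bouquet"}, "order"),
    "catalogue": (set(), "bouquet"),
}
print(plan(services, have=set(), want="order"))
# ['contacts', 'catalogue', 'florist']
```

The florist service never has to know about the contacts service; the planner discovers the chain, which is the property that lets thousands of independent services work together.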
Wednesday 17th February 2016
Viv’s Competition: Today’s Virtual Personal Assistants
Name: Siri
The original personal assistant, launched on the iPhone in 2011 and incorporated into many Apple products. Siri can answer questions, send messages, place calls, make dinner reservations through OpenTable and more.
Name: Google Now
Communication: Voice and typing
Available through the Google app or Chrome browser, capabilities include answering questions, getting directions and creating reminders. It also proactively delivers information to users that it predicts they might want, such as traffic conditions during commutes.
Name: Cortana
Built into Microsoft phones and Windows 10, Cortana will help you find things on your PC, manage your calendar and track packages. It also tells jokes.
Name: Alexa
Embedded inside Amazon’s Echo, the cylindrical speaker device that went on general sale in June 2015 in the US. Call on Alexa to stream music, give cooking assistance and reorder Amazon items.
Name: M
Released in August 2015 as a pilot and integrated into Facebook Messenger, M supports sophisticated interactions but behind the scenes relies on both artificial intelligence and humans to fulfil requests, though the idea is that eventually it will know enough to operate on its own.
Sunday 13th March 2016
DeepMind Smartphone Assistant
The movie Her is just an easy popular mainstream view of what that sort of thing is. We would like these smartphone assistant things to actually be smart and contextual and have a deeper understanding of what you’re trying to do.
At the moment most of these systems are extremely brittle — once you go off the templates that have been pre-programmed then they’re pretty useless. So it’s about making that actually adaptable and flexible and more robust.
It’s this dichotomy between pre-programmed and learnt. At the moment pretty much all smartphone assistants are special-cased and pre-programmed and that means they’re brittle because they can only do the things they were pre-programmed for. And the real world’s very messy and complicated and users do all sorts of unpredictable things that you can’t know ahead of time.
Our belief at DeepMind, certainly this was the founding principle, is that the only way to do intelligence is to do learning from the ground up and be general.
I think in the next two to three years you’ll start seeing it. I mean, it’ll be quite subtle to begin with, certain aspects will just work better. Maybe looking four to five, five-plus years away you’ll start seeing a big step change in capabilities. – Demis Hassabis
Friday 30th September 2016
Every once in a while, a product comes along that changes everyone’s expectations of what’s possible in user interfaces. The Mac. The World Wide Web. The iPhone.
Alexa belongs in that elite group of game changers.
Siri didn’t make it over the hump, despite the buzz it created.
Neither did Google Now or Cortana, despite their amazing capabilities and their progress in adoption. (Mary Meeker reports that 20% of Google searches on mobile are now done by voice.)
But Alexa has done so many things right that everyone else has missed that it is, to my mind, the first winning product of the conversational era. Google should be studying Alexa’s voice UI and emulating it.
Human-Computer Interaction takes big leaps every once in a while. The next generation of speech interfaces is one of those leaps.
Humans are increasingly going to be interacting with devices that are able to listen to us and talk back (and increasingly, they are going to be able to see us as well, and to personalize their behavior based on who they recognize).
And they are going to get better and better at processing a wide range of ways of expressing intention, rather than limiting us to a single defined action like a touch, click, or swipe.
Alexa gives us a taste of the future, in the way that Google did around the turn of the millennium. We were still early in both the big data era and the cloud era, and Google was seen as an outlier, a company with a specialized product that, while amazing, seemed outside the mainstream of the industry. Within a few years, it WAS the mainstream, having changed the rules of the game forever.
What Alexa has shown us is that rather than trying to boil the ocean with AI and conversational interfaces, what we need to do is to apply human design intelligence, break the conversation down into smaller domains where you can deliver satisfying results, and, within those domains, spend a lot of time thinking through the “fit and finish” so that interfaces are intuitive, interactions are complete, and what most people try to do “just works.” – Tim O’Reilly
Wednesday 19th October 2016
Samsung Acquires AI firm Viv Labs, run by Siri co-creator
Samsung said in a statement it plans to integrate the San Jose-based company’s AI platform, called Viv, into the Galaxy smartphones and expand voice-assistant services to home appliances and wearable technology devices.
Technology firms are locked in an increasingly heated race to make AI good enough to let consumers interact with their devices more naturally, especially via voice.
“Viv brings in a very unique technology to allow us to have an open system where any third-party service and content providers (can) add their services to our devices’ interfaces,” Rhee In-jong, Samsung’s executive vice president, told Reuters in an interview.
The executive said Samsung needs to “really revolutionise” how its devices operate, moving towards using voice rather than simply touch. “We can’t innovate using only in-house technology,” Rhee said.
Viv chief executive and co-founder Dag Kittlaus, a Siri co-creator, and other top managers at the firm will continue managing the business independently following the acquisition. Rhee told Reuters Samsung will continue to look for acquisitions to bolster its AI and other software capabilities, without naming any targets. – Se Young Lee