Apple AI

Wednesday 16th April 2014

Siri

Use it. Really, the more we use it the more it understands. It anticipates. That’s exactly what I’m beginning to find.

Why isn’t this technology used more frequently? Could it be that it’s just not advanced enough yet?

I put it more down to human nature. Our own procrastination if you like. We’re not taught to simply pick up Siri. So it’s people.

In what way can we expect to see the technology advance next? 

I think when it can actually start to reason with you. Let’s say you’re debating an appointment. Which one is most important? And it can actually debate that with you and give you a hypothesis as it were — why one could be preferred over the other.

The future of productivity lies in Artificial Intelligence (A.I.) – Laura Montini & Colin Lewis

 

Friday 25th April 2014

The Biggest Event in Human History

Artificial intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

There are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains. – Stephen Hawking

 

Monday 29th December 2014

Skype’s Real Life Babel Fish Translates English/Spanish in Real Time

Microsoft has released its first preview of Skype Translator, which allows real-time conversations between spoken English and Spanish and will be extended to more languages.

It is now available as a free download for Windows 8.1, starting with spoken English and Spanish along with more than 40 text-based languages for instant messaging.

Gurdeep Pall, Microsoft’s corporate vice-president of Skype and Lync, said in a blog post that Skype Translator would “open up endless possibilities”, adding: “Skype Translator relies on machine learning, which means that the more the technology is used, the smarter it gets. We are starting with English and Spanish, and as more people use the Skype Translator preview with these languages, the quality will continually improve.”

Skype Translator is part of Microsoft’s artificial intelligence research relying on machine learning and deep neural networks, much like Google and Apple’s voice assistants. It can understand speech and then rapidly translate it into another language before using text-to-speech systems to speak the translation back to the user, or in this case the other party.

The more people use the preview, the more data the Skype team will have to improve the translation – Samuel Gibbs
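The pipeline described above chains three stages: speech recognition, machine translation, and text-to-speech. Below is a minimal toy sketch of that flow in Python; every function and the tiny word list are invented for illustration and are not Microsoft's actual Skype Translator API.

    # Toy sketch of a speech-to-speech translation pipeline:
    # speech recognition -> machine translation -> text-to-speech.
    # All stubs and data are illustrative, not a real Skype/Microsoft API.

    TOY_EN_ES = {"hello": "hola", "how": "cómo", "are": "estás", "you": "tú"}

    def recognize_speech(audio: bytes) -> str:
        # Stand-in for automatic speech recognition; the "audio" here is
        # just text encoded as bytes so the example runs end to end.
        return audio.decode("utf-8")

    def translate_text(text: str) -> str:
        # Stand-in for the machine-learned translation model.
        return " ".join(TOY_EN_ES.get(word, word) for word in text.lower().split())

    def synthesize_speech(text: str) -> bytes:
        # Stand-in for text-to-speech synthesis.
        return text.encode("utf-8")

    def translate_utterance(audio: bytes) -> bytes:
        heard = recognize_speech(audio)       # speech -> text
        translated = translate_text(heard)    # text   -> text
        return synthesize_speech(translated)  # text   -> "speech"

    print(translate_utterance(b"Hello how are you").decode("utf-8"))  # hola cómo estás tú

The real system improves with use because, as the quote above notes, the models at each stage are machine-learned from the data flowing through them.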

 

Wednesday 11th February 2015

Artificial Intelligence

Behind much of the proliferation of AI startups are large companies such as Google, Microsoft Corp., and Amazon, which have quietly built up AI capabilities over the past decade to handle enormous sets of data and make predictions, like which ad someone is more likely to click on. Starting in the mid-2000s, the companies resurrected AI techniques developed in the 1980s, paired them with powerful computers and started making money.

Their efforts have resulted in products like Apple’s chirpy assistant Siri and Google’s self-driving cars. They have also spurred deal-making, with Facebook acquiring voice-recognition AI startup Wit.ai last month and Google buying DeepMind Technologies Ltd. in January 2014.

For Google, “the biggest thing will be artificial intelligence,” Chairman Eric Schmidt said last year in an interview with Bloomberg Television’s Emily Chang.

The AI boom has also been stoked by universities, which have noticed the commercial success of AI at places like Google and taken advantage of falling hardware costs to do more research and collaborate with closely held companies.

Last November, the University of California at San Francisco began working with Palo Alto, California-based MetaMind on two projects: one to spot prostate cancer and the other to predict what may happen to a patient after reaching a hospital’s intensive care unit so that staff can more quickly tailor their approach to the person – Jack Clark

 

Wednesday 6th May 2015

Artificial Intelligence: Viv

* Seed money came from the richest man in China and Gary Morgenthaler, the first investor in Siri. “I looked at the work they were doing,” Morgenthaler remembers, “and said, This is as good or better than anything I’ve seen in twenty years.”

* The solution was something they call the “planning objective function”. They created a program that could write its own code and find its own solutions. They named their invention Viv, after the Latin for “life.”

* They think they’re about six months from a beta test and a year from a public launch.

———————————————————–

It’s a completely new concept for talking to machines and making them do our bidding—not just asking them for simple information but also making them think and react.

Right now, a founder named Adam Cheyer is controlling Viv from his computer. “I’m gonna start with a few simple queries,” Cheyer says, “then ramp it up a little bit.” He speaks a question out loud: “What’s the status of JetBlue 133?” A second later, Viv returns with an answer: “Late again, what else is new?”

To achieve this simple result, Viv went to an airline database called FlightStats.com and got the estimated arrival time and records that show JetBlue 133 is on time just 62 percent of the time.

Onscreen, for the demo, Viv’s reasoning is displayed in a series of boxes—and this is where things get really extraordinary, because you can see Viv begin to reason and solve problems on its own. For each problem it’s presented, Viv writes the program to find the solution. Presented with a question about flight status, Viv decided to dig out the historical record on its own.

Now let’s make it more interesting. “What’s the best available seat on Virgin 351 next Wednesday?”

Viv searches an airline-services distributor called Travelport, the back end for Expedia and Orbitz, and finds twenty-eight available seats. Then it goes to SeatGuru.com for information on individual seats per plane, and this is when Viv really starts to show off. Every time you use Viv, you teach it your personal preferences. These go into a private database linked with your profile, currently called “My Stuff,” which will be (they promise) under your complete control. So Cheyer is talking to his personal version of Viv, and it knows that he likes aisle seats and extra legroom. The solution is seat 9D, an economy-class exit-row seat with extra legroom – John H. Richardson
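The seat-finding demo is, at its core, a chain of service calls filtered by stored preferences. Here is a rough sketch of that composition, with made-up data and stubs standing in for Travelport, SeatGuru and the “My Stuff” profile; this is not Viv’s actual planner, which generates such chains automatically.

    # Rough sketch of chaining services to pick the "best available seat",
    # in the spirit of the Viv demo. All data and stubs are invented; Viv's
    # real planner writes this kind of composition itself.

    AVAILABLE_SEATS = {               # stand-in for a Travelport-style inventory service
        "9D":  {"aisle": True,  "extra_legroom": True},
        "14C": {"aisle": True,  "extra_legroom": False},
        "22B": {"aisle": False, "extra_legroom": False},
    }

    USER_PREFERENCES = {"aisle": True, "extra_legroom": True}   # a "My Stuff"-style profile

    def seat_details(seat: str) -> dict:
        # Stand-in for a per-seat information service such as SeatGuru.
        return AVAILABLE_SEATS[seat]

    def best_seat(prefs: dict) -> str:
        # Score each available seat by how many stated preferences it satisfies.
        def score(seat: str) -> int:
            details = seat_details(seat)
            return sum(details.get(key) == wanted for key, wanted in prefs.items())
        return max(AVAILABLE_SEATS, key=score)

    print(best_seat(USER_PREFERENCES))   # 9D

The interesting claim in the article is not this fixed chain but that Viv assembles chains like it on the fly, for requests it has never been explicitly programmed to handle.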

 

Monday 18th May 2015

Disruption of Healthcare

By 2025, existing healthcare institutions will be crushed as new business models with better and more efficient care emerge.

Thousands of startups, as well as today’s data giants (Google, Apple, Microsoft, SAP, IBM, etc.) will all enter this lucrative $3.8 trillion healthcare industry with new business models that dematerialize, demonetize and democratize today’s bureaucratic and inefficient system.

Biometric sensing (wearables) and AI will make each of us the CEOs of our own health. Large-scale genomic sequencing and machine learning will allow us to understand the root cause of cancer, heart disease and neurodegenerative disease and what to do about it. Robotic surgeons can carry out an autonomous surgical procedure perfectly (every time) for pennies on the dollar. Each of us will be able to regrow a heart, liver, lung or kidney when we need it, instead of waiting for the donor to die – Peter Diamandis

 

Monday 25th May 2015

Self-Driving Cars

Apple and Uber are developing their own self-driving cars.

Tesla intends to release a software update next month that will turn on “autopilot” mode, immediately allowing all Tesla Model S drivers, in Elon Musk’s own words, to be driven between San Francisco and Seattle without the driver doing anything.

Tesla-driving humans won’t legally be able to let their cars do all the driving, but who are we kidding? There will be Teslas driving themselves, saving lives in the process, and governments will need to catch up to make that driving legal.

This process is already here in 2015. So when will the process end? When will self-driving cars conquer our roads?

According to Morgan Stanley, complete autonomous capability will be here by 2022, followed by massive market penetration by 2026, with the cars we know and love today entirely extinct roughly 20 years after that – Scott Santens

 

Wednesday 9th September 2015

Is a Cambrian Explosion Coming for Robotics?

Many of the base hardware technologies on which robots depend—particularly computing, data storage, and communications—have been improving at exponential growth rates. Two newly blossoming technologies—“Cloud Robotics” and “Deep Learning”—could leverage these base technologies in a virtuous cycle of explosive growth.

In Cloud Robotics—a term coined by James Kuffner (2010)—every robot learns from the experiences of all robots, which leads to rapid growth of robot competence, particularly as the number of robots grows.

Deep Learning algorithms are a method for robots to learn and generalize their associations based on very large (and often cloud-based) “training sets” that typically include millions of examples. Interestingly, Li (2014) noted that one of the robotic capabilities recently enabled by these combined technologies is vision—the same capability that may have played a leading role in the Cambrian Explosion.
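A toy way to picture “every robot learns from the experiences of all robots” is a loop in which each robot computes a local update and a cloud model averages them. The sketch below uses simple parameter averaging purely as an illustration; Pratt does not specify any particular aggregation scheme, and this is not a description of a production system.

    # Toy illustration of cloud-robotics-style shared learning: each robot
    # nudges a shared model toward its own observations, and the cloud
    # averages those updates. Parameter averaging is only one possible
    # scheme, chosen here for simplicity.

    from statistics import mean

    def local_update(shared, observations, rate=0.1):
        # Stand-in for on-robot learning from that robot's own experience.
        target = mean(observations)
        return [p + rate * (target - p) for p in shared]

    def cloud_aggregate(updates):
        # The cloud combines every robot's update into one shared model.
        return [mean(values) for values in zip(*updates)]

    shared_model = [0.0, 0.0]
    fleet_observations = [[1.0, 1.2], [0.8, 1.1], [1.3, 0.9]]   # three robots

    for _ in range(5):                      # five rounds of sharing
        updates = [local_update(shared_model, obs) for obs in fleet_observations]
        shared_model = cloud_aggregate(updates)

    print(shared_model)   # both parameters drift toward the fleet-wide mean

The point of the sketch is only the structure: the more robots contribute experience, the faster the shared model improves, which is the virtuous cycle the article describes.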

How soon might a Cambrian Explosion of robotics occur? It is hard to tell.

The very fast improvement of Deep Learning has been surprising, even to experts in the field. The recent availability of large amounts of training data and computing resources on the cloud has made this possible; the algorithms being used have existed for some time and the learning process has actually become simpler as performance has improved.

The timing of tipping points is hard to predict, and exactly when an explosion in robotics capabilities will occur is not clear. Commercial investment in autonomy and robotics—including and especially in autonomous cars—has significantly accelerated, with high-profile firms like Amazon, Apple, Google, and Uber all involved.

Human beings communicate externally with one another relatively slowly, at rates on the order of 10 bits per second. Robots, and computers in general, can communicate at rates over one gigabit per second—or roughly 100 million times faster. Based on this tremendous difference in external communication speeds, a combination of wireless and Internet communication can be exploited to share what is learned by every robot with all robots.

Human beings take decades to learn enough to add meaningfully to the compendium of common knowledge. However, robots not only stand on the shoulders of each other’s learning, but can start adding to the compendium of robot knowledge almost immediately after their creation.

The online repository of visually recorded objects and human activity is a tremendous resource that robots may soon exploit to improve their ability to understand and interact with the world, including interactions with human beings. Social media sites uploaded more than 1 trillion photos in 2013 and 2014 combined, and given the growth rate may upload another trillion in 2015.

The key problems in robot capability yet to be solved are those of generalizable knowledge representation and of cognition based on that representation. How can computer memories represent knowledge to be retrieved by memory-based methods so that similar but not identical situations will call up the appropriate memories and thoughts?

Significant cues are coming from the expanding understanding of the human brain, with the rate of understanding accelerating because of new brain imaging tools. Some machine learning algorithms, like the Deep Learning approach discussed earlier, are being applied in an attempt to discover generalizable representations automatically.

It is not clear how soon this problem will be solved. It may only be a few years until robots take off—or considerably longer. Robots are already making large strides in their abilities, but as the generalizable knowledge representation problem is addressed, the growth of robot capabilities will begin in earnest, and it will likely be explosive. The effects on economic output and human workers are certain to be profound. – Gill A. Pratt

 

Wednesday 20th January 2016

LEARNING / EDUCATION

Code School Udacity Promises Refunds If You Don’t Get a Job

Udacity, the online educational service founded by artificial intelligence guru and ex-Googler Sebastian Thrun, is offering a new set of tech degrees that guarantee a job in six months or your money back.

The Silicon Valley-based startup is attaching this money-back guarantee to four of its online courses, courses designed to train machine learning engineers and software developers who build apps for Google Android devices, Apple iOS devices, and the web.

These online courses typically span about 9 months and require about 10 hours of study per week, and they’re priced at $299 a month.

That’s about $100 above the company’s usual fee, but the idea is that students will also work closely with specialists that can help them prepare for interviews and find a job after their degree is complete. – Cade Metz

 

Wednesday 17th February 2016

Viv: Artificial Intelligence Virtual Personal Assistant

The company is working on what co-founder and CEO Dag Kittlaus describes as a “global brain” – a new form of voice-controlled virtual personal assistant.

With occasional flashes of personality, Viv will be able to perform thousands of tasks, and it won’t just be stuck in a phone but integrated into everything from fridges to cars. “Tell Viv what you want and it will orchestrate this massive network of services that will take care of it,” he says.

It is an ambitious project but Kittlaus isn’t without a track record. The last company he co-founded invented Siri, the original virtual assistant now standard in Apple products. Siri Inc was acquired by the tech giant for a reported $200m in 2010.

But, Kittlaus says, all these virtual assistants he helped birth are limited in their capabilities. Enter Viv. “What happens when you have a system that is 10,000 times more capable?” he asks. “It will shift the economics of the internet.”

Kittlaus pulls out his phone to demonstrate a prototype (he won’t say when Viv will launch but intimates that 2016 will be a big year). “I need a ride to the nearest pediatrician in San Jose,” he says to the phone. It produces a list of pediatricians sorted by distance and with their ratings courtesy of online doctor-booking service ZocDoc. Kittlaus taps one and the phone shows how far away the Uber is that could come and collect him. “If I click, there is going to be a car on the way,” he says. “See how those services just work together.”

He moves on to another example. “Send my mom a dozen yellow roses.” Viv can combine information in his contact list – where he has tagged his mother – with the services of an online florist that delivers across the US. Other requests Kittlaus says Viv will be able to accomplish include “On the way to my brother’s house, I need to pick up some good wine that goes well with lasagne” and “Find me a place to take my kids in the last week of March in the Caribbean”. Later I test out how well both Siri and Google’s virtual assistant perform on these examples. Neither gets far.

Viv can be different because it is being designed to be totally open, says Kittlaus. Any service, product or knowledge that any company or individual wants to imbue with a speaking component can be plugged into the network to work together with the others already in there. (Dozens of companies, from Uber to Florist One, are in the prototype). Other virtual assistants are essentially closed. Apple and only Apple, for example, decides what capabilities get integrated into Siri.

Viv’s biggest secret is the technology to bring the different services together on the fly to respond to requests for which it hasn’t been specifically programmed. “It is a program that writes its own program, which is the only way you can scale thousands of services working together that know nothing about one another,” says Kittlaus. Other personal assistants generally have their responses programmed by a developer. They are, essentially, scripted. There was no choice but to do things differently, says Kittlaus. To think of every combination of things that could be asked would be impossible. Viv will also include elements of learning; it will adapt as it comes to know your preferences.

Expect a phone app initially, says Kittlaus, but the loftier ambition is to incorporate Viv into all manner of devices, including cars. He imagines Viv’s icon becoming ubiquitous. “Anywhere you see it will mean you can talk to that thing,” he says. Of course this will require time: for companies to volunteer their services, and for users to come on board. But Kittlaus says some of the world’s largest consumer electronics companies are “very interested in plugging in”.

Viv has the potential to upend internet economics, says Kittlaus. Companies currently spend billions to advertise online with Google, and much traffic arrives based on web users’ keyword searches. But if instead requests are directed at Viv, it would cut out the middleman. The team are still exploring different business models, but one involves charging a processing fee on top of every transaction. – Zoe Corbyn

 

 

Wednesday 17th February 2016

Viv’s Competition: Today’s Virtual Personal Assistants

Name: Siri
Company: Apple
Communication: Voice
The original personal assistant, launched on the iPhone in 2011 and incorporated into many Apple products. Siri can answer questions, send messages, place calls, make dinner reservations through OpenTable and more.

Name: Google Now
Company: Google
Communication: Voice and typing
Available through the Google app or Chrome browser, capabilities include answering questions, getting directions and creating reminders. It also proactively delivers information to users that it predicts they might want, such as traffic conditions during commutes.

Name: Cortana
Company: Microsoft
Communication: Voice
Built into Microsoft phones and Windows 10, Cortana will help you find things on your PC, manage your calendar and track packages. It also tells jokes.

Name: Alexa
Company: Amazon
Communication: Voice
Embedded inside Amazon’s Echo, the cylindrical speaker device that went on general sale in June 2015 in the US. Call on Alexa to stream music, give cooking assistance and reorder Amazon items.

Name: M
Company: Facebook
Communication: Typing
Released in August 2015 as a pilot and integrated into Facebook Messenger, M supports sophisticated interactions but behind the scenes relies on both artificial intelligence and humans to fulfil requests, though the idea is that eventually it will know enough to operate on its own.

– Zoe Corbyn

 

Sunday 13th March 2016

Investing in Robotics and AI Companies

Here are some AI (and robotics) related companies to think about.

I’m not saying you should buy them (now) or sell for that matter, but they are definitely worth considering at the right valuations.

Think about becoming an owner of AI and robotics companies while there is still time. I plan to buy some of the most obvious ones (including Google) in the ongoing market downturn (2016-2017).

Top 6 most obvious AI companies

  • Alphabet (Google)
  • Facebook (M, Deep Learning)
  • IBM (Watson, neuromorphic chips)
  • Apple (Siri)
  • MSFT (skype RT lang, emo)
  • Amazon (customer prediction; link to old article)

Yes, I’m US centric. So sue me 🙂

Other

  • SAP (BI)
  • Oracle (BI)
  • Sony
  • Samsung
  • Twitter
  • Baidu
  • Alibaba
  • NEC
  • Nidec
  • Nuance (HHMM, speech)
  • Marketo
  • Opower
  • Nippon Ceramic
  • Pacific Industrial

Private companies (*I think):

  • *Mobvoi
  • *Scaled Inference
  • *Kensho
  • *Expect Labs
  • *Vicarious
  • *Nara Logics
  • *Context Relevant
  • *MetaMind
  • *Rethink Robotics
  • *Sentient Technologies
  • *Mobileye

General AI areas to consider when searching for AI companies

  • Self-driving cars
  • Language processing
  • Search agents
  • Image processing
  • Robotics
  • Machine learning
  • Experts
  • Oil and mineral exploration
  • Pharmaceutical research
  • Materials research
  • Computer chips (neuromorphic, memristors)
  • Energy, power utilities

– Mikael Syding

 

Saturday 18th June 2016

Elon Musk: We Are Less Than Two Years From Complete Car Autonomy

The Tesla CEO spoke at the Code Conference and predicted that we’re closer to self-driving cars than anybody thinks.

“I think we are less than two years away from complete autonomy, safer than humans, but regulations should take at least another year,” Musk said.

While many auto and tech companies – from Google to Uber, GM to Lyft, and Apple to Ford – are researching and testing autonomous vehicles, Tesla seems on the verge of announcing that its Model 3 consumer sedan will have full self-driving capabilities.

Musk did not confirm that feature, but when asked multiple times on stage, he replied that there would be another Tesla event later in the year in which he would have more details.

The only thing he would say is that Tesla would do “the obvious thing” – seemingly a reference to a prior comment he made about autonomous driving being a must-have feature for future vehicles. – Brian Solomon

 

Friday 30th September 2016

Amazon Alexa


Every once in a while, a product comes along that changes everyone’s expectations of what’s possible in user interfaces. The Mac. The World Wide Web. The iPhone.

Alexa belongs in that elite group of game changers.

Siri didn’t make it over the hump, despite the buzz it created.

Neither did Google Now or Cortana, despite their amazing capabilities and their progress in adoption. (Mary Meeker reports that 20% of Google searches on mobile are now done by voice.)

But Alexa has done so many things right that everyone else has missed that it is, to my mind, the first winning product of the conversational era. Google should be studying Alexa’s voice UI and emulating it.


Human-Computer Interaction takes big leaps every once in a while. The next generation of speech interfaces is one of those leaps.

Humans are increasingly going to be interacting with devices that are able to listen to us and talk back (and increasingly, they are going to be able to see us as well, and to personalize their behavior based on who they recognize).

And they are going to get better and better at processing a wide range of ways of expressing intention, rather than limiting us to a single defined action like a touch, click, or swipe.

Alexa gives us a taste of the future, in the way that Google did around the turn of the millennium. We were still early in both the big data era and the cloud era, and Google was seen as an outlier, a company with a specialized product that, while amazing, seemed outside the mainstream of the industry. Within a few years, it WAS the mainstream, having changed the rules of the game forever.

What Alexa has shown us is that rather than trying to boil the ocean with AI and conversational interfaces, we need to apply human design intelligence, break the conversation down into smaller domains where you can deliver satisfying results, and, within those domains, spend a lot of time thinking through the “fit and finish” so that interfaces are intuitive, interactions are complete, and what most people try to do “just works.” – Tim O’Reilly
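The “smaller domains that just work” idea is roughly what a skill-style intent handler looks like in code: a handful of intents, each handled completely, plus a graceful fallback. The following is a hypothetical sketch of that pattern, not the actual Alexa Skills Kit API.

    # Hypothetical sketch of the "small domain, handled completely" pattern.
    # The intent names and handlers are invented; this is not the real
    # Alexa Skills Kit API.

    def handle_set_timer(slots):
        minutes = slots.get("minutes", "10")
        return f"Timer set for {minutes} minutes."

    def handle_play_music(slots):
        artist = slots.get("artist", "something you might like")
        return f"Playing {artist}."

    def handle_unknown(slots):
        # A clear fallback keeps the interaction complete even on a miss.
        return "Sorry, I can't help with that yet."

    INTENT_HANDLERS = {
        "SetTimer": handle_set_timer,
        "PlayMusic": handle_play_music,
    }

    def respond(intent, slots):
        return INTENT_HANDLERS.get(intent, handle_unknown)(slots)

    print(respond("SetTimer", {"minutes": "5"}))   # Timer set for 5 minutes.
    print(respond("OrderPizza", {}))               # Sorry, I can't help with that yet.

Within each narrow intent the interaction can be polished end to end, which is the “fit and finish” O’Reilly is describing.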

 

Wednesday 19th October 2016

Samsung Acquires AI firm Viv Labs, run by Siri co-creator


Samsung said in a statement it plans to integrate the San Jose-based company’s AI platform, called Viv, into its Galaxy smartphones and expand voice-assistant services to home appliances and wearable technology devices.

Technology firms are locked in an increasingly heated race to make AI good enough to let consumers interact with their devices more naturally, especially via voice.

“Viv brings in a very unique technology to allow us to have an open system where any third-party service and content providers (can) add their services to our devices’ interfaces,” Rhee In-jong, Samsung’s executive vice president, told Reuters in an interview.

The executive said Samsung needs to “really revolutionise” how its devices operate, moving towards using voice rather than simply touch. “We can’t innovate using only in-house technology,” Rhee said.

Viv chief executive and co-founder Dag Kittlaus, a Siri co-creator, and other top managers at the firm will continue managing the business independently following the acquisition. Rhee told Reuters Samsung will continue to look for acquisitions to bolster its AI and other software capabilities, without naming any targets. – Se Young Lee

 

Wednesday 19th October 2016

Google’s DeepMind Achieves Speech-Generation Breakthrough

Google’s DeepMind unit, which is working to develop super-intelligent computers, has created a system for machine-generated speech that it says outperforms existing technology by 50 percent.

U.K.-based DeepMind, which Google acquired for about 400 million pounds ($533 million) in 2014, developed an artificial intelligence called WaveNet that can mimic human speech by learning how to form the individual sound waves a human voice creates, it said in a blog post Friday.
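WaveNet’s core idea is autoregressive generation: each audio sample is predicted from the samples that came before it. The toy loop below illustrates only that sample-by-sample structure; the weighted sum stands in for a trained network and is not DeepMind’s actual dilated-convolution architecture.

    # Toy sketch of autoregressive waveform generation: each new sample is
    # predicted from previous samples. The weighted sum is a stand-in for a
    # trained network; it is not DeepMind's WaveNet architecture.

    def predict_next(history, weights=(0.6, 0.3, 0.1)):
        # The most recent sample gets the largest weight.
        recent = history[-len(weights):]
        return sum(w * s for w, s in zip(weights, reversed(recent)))

    def generate(n_samples, seed=(0.0, 0.5, 1.0)):
        samples = list(seed)
        for _ in range(n_samples):
            samples.append(predict_next(samples))
        return samples

    print(generate(5))   # continues the "waveform" one sample at a time

In the real model the predictor is a deep network trained on recorded speech, and conditioning it on text is what turns the sampler into a text-to-speech system.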

In blind tests for U.S. English and Mandarin Chinese, human listeners found WaveNet-generated speech sounded more natural than that created with any of Google’s existing text-to-speech programs, which are based on different technologies. WaveNet still underperformed recordings of actual human speech.

Tech companies are likely to pay close attention to DeepMind’s breakthrough. Speech is becoming an increasingly important way humans interact with everything from mobile phones to cars. Amazon.com Inc., Apple Inc., Microsoft Corp. and Alphabet Inc.’s Google have all invested in personal digital assistants that primarily interact with users through speech.

Mark Bennett, the international director of Google Play, which sells Android apps, told an Android developer conference in London last week that 20 percent of mobile searches using Google are made by voice, not written text.

And while researchers have made great strides in getting computers to understand spoken language, their ability to talk back in ways that seem fully human has lagged. – Jeremy Kahn
