Artificial Intelligence

Monday 24th March 2014

The Singularity

The advances we’ve seen in the past few years—cars that drive themselves, useful humanoid robots, speech recognition and synthesis systems, 3D printers, Jeopardy!-champion computers—are not the crowning achievements of the computer era. They’re the warm-up acts. As we move deeper into the second machine age we’ll see more and more such wonders, and they’ll become more and more impressive.

How can we be so sure? Because the exponential, digital, and recombinant powers of the second machine age have made it possible for humanity to create two of the most important one-time events in our history: the emergence of real, useful artificial intelligence (AI) and the connection of most of the people on the planet via a common digital network.

Either of these advances alone would fundamentally change our growth prospects. When combined, they’re more important than anything since the Industrial Revolution, which forever transformed how physical work was done.

We can’t predict exactly what new insights, products, and solutions will arrive in the coming years, but we are fully confident that they’ll be impressive. The second machine age will be characterized by countless instances of machine intelligence and billions of interconnected brains working together to better understand and improve our world. It will make a mockery of all that came before. – Erik Brynjolfsson & Andrew McAfee


Tuesday 1st April 2014


NYT quote in 1997: “It may be a hundred years before a computer beats humans at Go”. Took 16 years. – Balaji S. Srinivasan, Andreessen Horowitz


Friday 25th April 2014

The Biggest Event in Human History

Artificial intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

There are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains. – Stephen Hawking


Monday 19th May 2014


One of the most popular topics in the digital consensus space (a new term for cryptocurrency 2.0 that I’m beta-testing) is the concept of decentralized autonomous entities.

Many of us have been frustrated by the lack of coherent terminology here; as Bitshares’ Daniel Larimer points out, “everyone thinks a DAC is just a way of IPOing your centralized company.”

Organizations with Humans at the Center

1 – Boring old organizations (Humans at center)

Google is an organization in this classification, not an autonomous agent, because (1) the work is done by people, and (2) the management is done by people.

2 – Robots (Assembly Lines)


Organizations with Automation at the Center

3 – Distributed Autonomous Organizations

I would place Uber halfway between an organization and a DAO (although it’s not decentralized) because the primary decision-making process, the matching of drivers with passengers, is done automatically, though not all aspects follow that model.

4 – Artificial Intelligence (holy grail)

– Vitalik Buterin


Monday 23rd June 2014

Voice Recognition

Tim Tuttle, CEO and founder of Expect Labs, said that in the last 18 months, voice recognition accuracy improved 30%—a bigger gain than in the entire previous decade. A third of searches are now being done using voice commands.

Voice recognition uses machine learning algorithms that depend on people actually using them to get better. Tuttle believes we’re at the beginning of a virtuous cycle wherein wider adoption is yielding more data; more data translates into better performance; better performance results in wider adoption, more data, and so on – Jason Dorrier


Monday 7th July 2014

So far, 2014 has seen many exponential technologies move to the next level. It looks like we’ve hit an inflection point where big breakthroughs are now routine and happen too fast for any one person to keep up with (Bitcoin, 3D printing, Virtual Reality, Drones, Space, Robotics, Voice Translation, Telepresence, Deep Learning) – Lee Banfield


Tuesday 8th July 2014

Deep Learning

When asked what he would focus on if he were currently a computer science student, and what would be the most significant type of technology in coming years, Bill Gates’ answer focused on machine learning:

“The ultimate is computers that learn. So called deep learning which started at Microsoft and is now being used by many researchers looks like a real advance that may finally learn.”

There has already been more progress in the last three years in video and audio recognition than had ever been made before, he says. This encompasses everything from basic machine-learning algorithms to ones that will one day be able to read a book and understand what it means – Max Nisen


Monday 14th July 2014

Achievable Developments in the Next 10 Years

I’m most excited about developments in the two areas that I’m pioneering: asteroid mining and the extension of the healthy human lifespan.

Through Planetary Resources, we expect to be identifying, prospecting and eventually mining materials from near-Earth asteroids well within this decade. This will create an economic engine that will propel humanity beyond low-Earth orbit.

Through Human Longevity Inc, we will be creating the largest database of human genotypic, phenotypic, and microbiology data ever assembled, and using machine learning to analyze it to truly understand disease and healthy aging. We feel we have the ability to extend the healthy human lifespan by 30 to 40 years. For me, going to space and living longer — it doesn’t get better! – Peter Diamandis, Co-founder of Singularity University


Monday 14th July 2014

Linear vs. Exponential Thinking

I’m currently 17. Do you think I will see things such as life extension due to the eradication or management of cancer, dementia, 3D organ printing, etc. within my lifetime? What are some technologies that seem way off to your average person that may become a reality within the next decade?

If you’re only 17, you will see a hell of a lot more than just life extension! You’re going to see colonies on Mars. You’re going to see us turn most everything we think of as science fiction today into science fact. We’re living during a time of accelerating change. Don’t think with your linear mind! Yes, we will solve cancer. Yes, we will solve dementia. Yes, we will start regrowing organs. All of this will happen faster than you can imagine.

I think the average person is a linear thinker and most of the extraordinary breakthroughs that will happen this decade will initially seem far off. Then, when the breakthroughs occur, they will take them for granted. So just think about artificial intelligence becoming your physician, or artificial intelligence becoming your personal tutor better than the best Harvard professor, or 3-D printing organs, or extending the human lifespan 30 years, or landing the first private citizens on Mars within the next 15 years. These things all sound crazy until we make them happen – Peter Diamandis, Co-founder of Singularity University

30 steps linearly gets you to 30…30 steps exponentially gets you to a billion – Ray Kurzweil
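Kurzweil’s arithmetic is easy to verify; a minimal sketch, assuming “exponentially” means doubling at each step:

```python
# 30 linear steps: add 1 each step, starting from 0.
linear = sum(1 for _ in range(30))

# 30 exponential steps: double each step, starting from 1.
exponential = 2 ** 30

print(linear)       # 30
print(exponential)  # 1073741824 (about a billion)
```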


Wednesday 20th August 2014

The Quest to Build an Artificial Brain

Deep learning has suddenly spread across the commercial tech world, from Google to Microsoft to Baidu to Twitter, just a few years after most AI researchers openly scoffed at it.

All of these tech companies are now exploring a particular type of deep learning called convolutional neural networks, aiming to build web services that can do things like automatically understand natural language and recognize images. At Google, “convnets” power the voice recognition system available on Android phones. At China’s Baidu, they drive a new visual search engine.

But this is just a start. The deep learning community is working to improve the technology. Today’s most widely used convolutional neural nets rely almost exclusively on supervised learning. Basically, that means that if you want one to learn how to identify a particular object, you have to show it many labeled examples of that object. Yet unsupervised learning—or learning from unlabeled data—is closer to how real brains learn, and some deep learning research is exploring this area.
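The supervised/unsupervised distinction can be seen in a toy sketch (nothing like a real convolutional net; the one-dimensional data and the "cat"/"dog" labels below are invented for illustration). A supervised learner works only because every training example carries a label:

```python
# Supervised learning in miniature: labeled 1-D examples are required
# to learn one centroid per class. An unsupervised learner would have
# to discover the two clusters without the "cat"/"dog" labels.
labeled = [(1.0, "cat"), (1.2, "cat"), (8.9, "dog"), (9.3, "dog")]

# Average the examples of each label to get a class centroid.
groups = {}
for x, label in labeled:
    groups.setdefault(label, []).append(x)
centroids = {label: sum(xs) / len(xs) for label, xs in groups.items()}

def classify(x):
    """Predict the label whose learned centroid is nearest to x."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

print(classify(1.1))  # cat
print(classify(9.0))  # dog
```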

“How this is done in the brain is pretty much completely unknown. Synapses adjust themselves, but we don’t have a clear picture for what the algorithm of the cortex is,” says LeCun. “We know the ultimate answer is unsupervised learning, but we don’t have the answer yet.” – Daniela Hernandez


Monday 29th September 2014

AI Expert Predicts 50% of Web Searches Will Soon be Speech and Images

Baidu — the second-largest web search provider in the world, with its biggest user base in its home country of China — has been preparing its systems for a time when text will be just another option for searching, and not necessarily the default.

“In five years, we think 50 percent of queries will be on speech or images,” Andrew Ng, Baidu’s chief scientist and the head of Baidu Research, said Wednesday during a Gigaom meetup on his area of expertise, deep learning.

A type of artificial intelligence, deep learning involves training systems called artificial neural networks on lots of information derived from audio, images, and other inputs, and then presenting the systems with new information and receiving inferences about it in response.

“Speech and images are, in my view, a much more natural way to communicate [than text],” Ng said.

Indeed, already one out of 10 queries Baidu receives comes through speech, he said. And pointing a smartphone camera at a handbag might identify a particular model more quickly than endlessly rephrasing a typed query. As Ng put it, “It’s easier to show us a picture.”

“I think that whoever wins AI will win the Internet,” Ng said – Jordan Novet


Monday 20th October 2014

Singularity University

Peter Diamandis read my book The Singularity is Near while hiking in Patagonia. We had a dinner and he said we should start a university based on these ideas because [the book] changed his perspective.

No other university really takes the view that all of these technologies—computation, artificial intelligence, biotechnology, and nanotechnology (which refers to manipulating materials as an information technology)—are progressing exponentially, and it is going to lead to revolutionary changes in the world. That’s the fundamental thesis of Singularity University. – Ray Kurzweil


Monday 20th October 2014

First Demonstration of AI on a Quantum Computer

A team of Chinese physicists has trained a quantum computer to recognize handwritten characters, the first demonstration of “quantum artificial intelligence.”

“The successful classification shows the ability of our quantum machine to learn and work like an intelligent human,” say Li and co.

That’s an interesting result for artificial intelligence and more broadly for quantum computing. It demonstrates the potential for quantum computation, not just for character recognition, but for other kinds of big data challenges. “This work paves the way to a bright future where the Big Data is processed efficiently in a parallel way provided by quantum mechanics,” say the team. There are significant challenges ahead, of course. Not least of these is building more powerful quantum computers. The devices that rely on nuclear magnetic resonance cannot handle more than a handful of qubits.

So physicists are racing to build quantum computers that can handle significantly more qubits. This is a race with fame and fortune at the end of it. There is no shortage of runners and the team that pulls it off will find an important place in the history of computing and physics in general.

With a few hundred qubits, who knows what quantum artificial intelligence could do. – The Physics arXiv Blog


Friday 21st November 2014

Artificial Intelligence

In “Future Progress in Artificial Intelligence: A Poll Among Experts” by Vincent C. Müller and Nick Bostrom, the authors come to this conclusion:

“These results should be taken with some grains of salt, but we think it is fair to say that the results reveal a view among experts that AI systems will probably (over 50%) reach overall human ability by 2040-50, and very likely (with 90% probability) by 2075.

“From reaching human ability, it will move on to superintelligence in 2 years (10%) to 30 years (75%) thereafter. The experts say the probability is 31% that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.” – Jason Papallo


Tuesday 25th November 2014


Around 2002 I attended a small party for Google—before its IPO, when it only focused on search. I struck up a conversation with Larry Page, Google’s brilliant cofounder, who became the company’s CEO in 2011. “Larry, I still don’t get it. There are so many search companies. Web search, for free? Where does that get you?”

My unimaginative blindness is solid evidence that predicting is hard, especially about the future, but in my defense this was before Google had ramped up its ad-auction scheme to generate real income, long before YouTube or any other major acquisitions. I was not the only avid user of its search site who thought it would not last long. But Page’s reply has always stuck with me: “Oh, we’re really making an AI.”

I’ve thought a lot about that conversation over the past few years as Google has bought 14 AI and robotics companies. At first glance, you might think that Google is beefing up its AI portfolio to improve its search capabilities, since search contributes 80 percent of its revenue. But I think that’s backward. Rather than use AI to make its search better, Google is using search to make its AI better. Every time you type a query, click on a search-generated link, or create a link on the web, you are training the Google AI.

When you type “Easter Bunny” into the image search bar and then click on the most Easter Bunny-looking image, you are teaching the AI what an Easter bunny looks like. Each of the 12.1 billion queries that Google’s 1.2 billion searchers conduct each day tutors the deep-learning AI over and over again. With another 10 years of steady improvements to its AI algorithms, plus a thousand-fold more data and 100 times more computing resources, Google will have an unrivaled AI. My prediction: By 2024, Google’s main product will not be search but AI. – Kevin Kelly


Tuesday 25th November 2014


An exponential trend seems unimportant until it is all important – Balaji S. Srinivasan


Tuesday 25th November 2014

Tech’s Pace is Like a Dozen Gutenberg Moments Happening at the Same Time

Drilling down into the concepts and consequences of our exponential pace, Singularity University’s global ambassador and founding executive director, Salim Ismail, set the stage.

We’re at an inflection point, he said, where we are digitizing and augmenting the human experience with technology. That digitization is accelerating change. The question is: How can individuals and society, more generally, navigate it?

Five hundred years ago, Johannes Gutenberg’s printing press freed information as never before. Ismail framed the current pace of technology as Gutenberg to the extreme: “We’re having about a dozen Gutenberg moments all at the same time.”

Ismail showed a video of someone riding in one of Google’s self-driving cars as it navigated an obstacle course at top speed. The rider is amazed and a little nervous—the video ends with him letting out a little involuntary scream. Today, the world is letting out a little collective Google scream. – Jason Dorrier


Tuesday 25th November 2014

The Business Plans of the Next 10,000 Startups Are Easy to Forecast: Take X and Add AI

A picture of our AI future is coming into view, and it is not the HAL 9000—a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness—or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off.

This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize.

This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it’s here. –Kevin Kelly


Saturday 13th December 2014

“Reverse engineering the neural cortex: We’re going to finish this off in less than five years”— Jeff Hawkins

Citrix Startup Accelerator’s chief technologist Michael Harries said any entrepreneurs who aren’t familiarizing themselves with AI have “rocks in their heads.”

According to Modar Alaoui, AI’s immediate future lies in ambient intelligence in smartphones and smart cars.

Jeff Hawkins believes reverse engineering the neural cortex is the fastest way to intelligent machines. Neuroscience has shown that language and touch work on the same principles, and Hawkins expects a machine’s abilities to unfold in a similar way once scientists are able to tap inherent potential.

“Progress is incremental but also exponential. We’re going to finish this off in less than five years, I believe.”

If the thought of enlightened machines in the next five years is too much, Hawkins assured attendees that artificial intelligence isn’t inherently dangerous. The ability to self-replicate is dangerous, however. – Jessica Lipsky


Monday 29th December 2014

“My take is that A.I. is taking over. A few humans might still be ‘in charge,’ but less and less so”

– Sebastian Thrun, Lead Developer of Google’s Driverless Car Project


Thursday 1st January 2015

One of the strangest and most surprising areas of progress has been in the field of brainwaves and mind control. A double amputee became the first person to control robotic arms with his mind, and a team of researchers successfully achieved brain-to-brain verbal communication across a distance of 5,000 miles.

Artificial Intelligence has captured the imagination of the world. The rate of progress is so powerful and dramatic that people as distinguished as Elon Musk are worrying that this technology might run away from us within just 5 years and bring us existential threats. – Lee Banfield


Sunday 18th January 2015

Artificial Intelligence

In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist – Robert McMillan


Saturday 24th January 2015

Dr. Kurzweil’s current predictions include:

1. self-driving cars by 2017
2. personal assistant search engines by 2019
3. switching off our fat cells by 2020
4. fully immersive virtual realities by 2023
5. 100 percent energy from solar by 2033

Dr. Kurzweil predicts that growth in the 3 areas — genetics, nanotechnology and robotics (GNR) — will be the basis of the singularity. In his book The Singularity Is Near he says, “It will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but that transcends our biological roots. There will be no distinction, post singularity, between human and machine or between physical and virtual reality.” – Lucy Flores


Friday 30th January 2015


We are on the edge of change comparable to the rise of human life on Earth. — Vernor Vinge 


As I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century.

He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021, in only seven years.

A couple decades later, he believes a 20th century’s worth of progress will happen multiple times in the same year, and even later, in less than one month.

All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.
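Under one simple reading of the Law of Accelerating Returns (assume, for illustration, that the rate of progress doubles every decade), Kurzweil’s 1,000x figure falls out of a geometric series:

```python
# Assume progress accrues at 2**k "units per year" during decade k,
# with decade 0 starting in 1900 (i.e. a doubling every ten years).
def century_progress(first_decade):
    # Total progress over the ten decades of one century.
    return sum(10 * 2 ** k for k in range(first_decade, first_decade + 10))

twentieth = century_progress(0)      # 1900-2000
twenty_first = century_progress(10)  # 2000-2100

print(twenty_first / twentieth)  # 1024.0, roughly Kurzweil's 1,000x
```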

This isn’t science fiction. It’s what many scientists smarter and more knowledgeable than you or I firmly believe—and if you look at history, it’s what we should logically predict. When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you that you may live to be 150, or 250, or not die at all, your instinct will be, “That’s stupid—if there’s one thing I know from history, it’s that everybody dies.” And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either – Tim Urban


Friday 30th January 2015

Artificial Intelligence

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly.

While there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI’s caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.

* Cars are full of ANI systems

* Your phone is a little ANI factory.

* Google Translate is another classic ANI

Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world’s ANI systems “are like the amino acids in the early Earth’s primordial ooze”—the inanimate stuff of life that, one unexpected day, woke up.


AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating an AGI is a much harder task than creating an ANI, and we have yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” An AGI would be able to do all of those things as easily as you can.


AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board.

As of now, humans have conquered the lowest caliber of AI—ANI—in many ways, and it’s everywhere. The AI Revolution is the road from ANI, through AGI, to ASI—a road we may or may not survive but that, either way, will change everything – Tim Urban


Thursday 5th March 2015


I used to say that this is the most important graph in all the technology business. I’m now of the opinion that this is the most important graph ever graphed. – Steve Jurvetson, DFJ Venture Capital


Thursday 5th March 2015

When Exponential Progress Becomes Reality

Human perception is linear; technological progress is exponential. Our brains are hardwired to have linear expectations because that has always been the case. Technology today progresses so fast that the past no longer looks like the present, and the present is nowhere near the future ahead. Then, seemingly out of nowhere, we find ourselves in a reality quite different from what we would expect.

We are still prone to underestimate the progress that is coming because it’s difficult to internalize the reality that we’re living in a world of exponential technological change. It is a fairly recent development. And it’s important to get an understanding of the massive scale of advancements that the technologies of the future will enable. Particularly now, as we’ve reached what Kurzweil calls the “Second Half of the Chessboard.” – Niv Dror
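The “Second Half of the Chessboard” refers to the old legend of rice grains doubled on each of a chessboard’s 64 squares; the second half is where the doubling becomes overwhelming. A quick check of the arithmetic:

```python
# One grain on the first square, doubling on each of the 64 squares.
grains = [2 ** n for n in range(64)]

first_half = sum(grains[:32])   # squares 1-32
second_half = sum(grains[32:])  # squares 33-64

print(first_half)                 # 4294967295 (~4.3 billion grains)
print(second_half // first_half)  # 4294967296: the second half dwarfs the first
```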


Wednesday 18th March 2015

How Long Until the First Machine Reaches Superintelligence?

Not shockingly, opinions vary wildly and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:

[Graph from Jeremy Howard’s TED Talk]

Those people subscribe to the belief that this is happening soon—that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.

Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and believe that we’re not actually that close to the tripwire.

The Kurzweil camp would counter that the only underestimating that’s happening is the underappreciation of exponential growth, and they’d compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.

The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.

Kurzweil’s depiction of the 2045 singularity is brought about by three simultaneous revolutions in biotechnology, nanotechnology, and, most powerfully, AI.

I suggested earlier that our fate when this colossal new power is born rides on who will control that power and what their motivation will be. Kurzweil neatly answers both parts of this question with the sentence, “[ASI] is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such, it will reflect our values because it will be us.” – Tim Urban


Monday 20th April 2015

The Great Turning Took Place a Decade Ago


The great turning took place a decade ago, while we were all distracted by social networking, smartphones and the emerging banking crisis. Its breathtaking climb since tells us that everything of the previous 40 years—that is, the multi-trillion-dollar revolution in semiconductors, computers, communications and the Internet—was likely nothing but a prelude, a warm-up, for what is to come. It will be upon this wall that millennials will climb their careers against almost-unimaginably quick, complex and ever-changing competition.

Crowd-sharing, crowdfunding, bitcoin, micro-venture funding, cloud computing, Big Data—all have been early attempts, of varying success, to cope with the next phase of Moore’s Law. Expect many more to come. Meanwhile, as always, this new pace will become the metronome of the larger culture.

Everything is now in play. Millennials face one of the greatest opportunities any generation has ever known: to completely remake the world through boundless digital technology.

The good news is that this generation seems to be already, often unconsciously, preparing for this adventure—through robotics competitions, gatherings of tech enthusiasts, engineers and tinkerers at Maker Faires and other do-it-yourself events, and playing with new applications for their drones and 3D printers. Having lived their entire lives at the pace of Moore’s Law, they seem to sense that the time has come to hit the accelerator. If millennials don’t entirely get it yet, they soon will – Michael S. Malone


Monday 27th April 2015

Building the Content Base for the Superintelligent AI

As soon as it has the learning algorithm, it just goes out on the internet and reads all the magazines and books and things like that.

We have essentially been building the content base for the super-intelligence. You think you’re using the internet, but that’s actually what you’re doing – Bill Gates


Monday 27th April 2015

Knowledge and Intuition

Today, machines can process regular spoken language and not only recognize human faces, but also read their expressions. They can classify personality types, and have started being able to carry out conversations with appropriate emotional tenor.

Machines are getting better than humans at figuring out who to hire, who’s in a mood to pay a little more for that sweater, and who needs a coupon to nudge them toward a sale. In applications around the world, software is being used to predict whether people are lying, how they feel and whom they’ll vote for.

To crack these cognitive and emotional puzzles, computers needed not only sophisticated, efficient algorithms, but also vast amounts of human-generated data, which can now be easily harvested from our digitized world. The results are dazzling. Most of what we think of as expertise, knowledge and intuition is being deconstructed and recreated as an algorithmic competency, fueled by big data – Zeynep Tufekci


Monday 11th May 2015

AI and Life Extension

Matt Schlicht: Are you completely focused on the bitcoin industry or are there other industries you are active in?

Roger Ver: In the future I plan to focus on AI and human life extension technologies.

Robert Kuhne: It’s good to hear that you are planning to invest in AI and life extension technologies, so to what extent do you agree with the “Singularity” hypothesis of Ray Kurzweil?

Roger Ver: I read all of Kurzweil’s books and was influenced by them. At the end of the day, I don’t know what is going to happen, but I intend to stay alive to find out. Interestingly enough, recently I was able to meet Ray’s son, Ethan Kurzweil, who works at a VC firm that is interested in investing in the Bitcoin space.


Monday 18th May 2015

Disruption of Healthcare

By 2025, existing healthcare institutions will be crushed as new business models with better and more efficient care emerge.

Thousands of startups, as well as today’s data giants (Google, Apple, Microsoft, SAP, IBM, etc.) will all enter this lucrative $3.8 trillion healthcare industry with new business models that dematerialize, demonetize and democratize today’s bureaucratic and inefficient system.

Biometric sensing (wearables) and AI will make each of us the CEOs of our own health. Large-scale genomic sequencing and machine learning will allow us to understand the root cause of cancer, heart disease and neurodegenerative disease and what to do about it. Robotic surgeons will carry out autonomous surgical procedures perfectly (every time) for pennies on the dollar. Each of us will be able to regrow a heart, liver, lung or kidney when we need it, instead of waiting for the donor to die – Peter Diamandis


Monday 25th May 2015

Moore’s Law Keeps Going, Defying Expectations

Moore never thought his prediction would last 50 years. “The original prediction was to look at 10 years, which I thought was a stretch,” he told Friedman last week. “This was going from about 60 elements on an integrated circuit to 60,000—a 1,000-fold extrapolation over 10 years. I thought that was pretty wild. The fact that something similar is going on for 50 years is truly amazing.”
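
A quick back-of-the-envelope check (my own arithmetic, not from the article) shows why a 1,000-fold extrapolation over ten years amounts to roughly one doubling per year:

```python
import math

# Moore's 1965 extrapolation: ~60 components per chip growing to ~60,000 in ten years.
start, end, years = 60, 60_000, 10
growth = end / start                # a 1,000-fold increase
doublings = math.log2(growth)       # how many doublings that implies
print(round(doublings, 1))          # -> 10.0 doublings
print(round(years / doublings, 2))  # -> 1.0 year per doubling
```

Ten doublings in ten years is the cadence that, sustained for five decades, produces the "truly amazing" growth Moore describes.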

Many technologists have forecast the demise of Moore’s doubling over the years, and Moore himself states that this exponential growth can’t last forever. Still, his law persists today, and hence the computational growth it predicts will continue to profoundly change our world.

As he put it: “We’ve just seen the beginning of what computers are going to do for us.” – Annie Sneed


Tuesday 9th June 2015

Ray Kurzweil to Release New Book in 18 Months Titled “The Singularity Is Nearer”

Appearing via a telepresence robot, Ray Kurzweil took the stage at the Exponential Finance conference to address questions posed by CNBC’s Bob Pisani.

During the discussion on the future of computing and whether Moore’s Law was truly in jeopardy, Kurzweil took the opportunity to announce a sequel to The Singularity Is Near, aptly titled The Singularity Is Nearer, planned for release in 18 months and slated to include updated charts.

It’s likely that the text will also aim to showcase his prediction track record, akin to a report [PDF] he released in 2010 titled “How My Predictions Are Faring.”

He also explained that his team is utilizing numerous AI techniques to deal with language and learning, and in the process, collaborating with Google’s DeepMind, the recently acquired startup that developed a neural network that learned how to play video games successfully.

“Mastering intelligence is so difficult that we need to throw everything we have at it. I think we are very much on schedule to achieve human levels of intelligence by 2029.”

David J. Hill


Tuesday 9th June 2015

Kurzweil on the Danger of AI: I’m Not Sure Where Musk is Coming From

Kurzweil’s hopeful yet cautious point of view on artificial intelligence stands in contrast to Elon Musk, who caused a stir last year when he tweeted, “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”

In Bostrom’s book, he proposes scenarios in which humans suffer an untimely end due to artificial intelligence simply trying to fulfill its goals.

Troubled by the attention this tweet attracted, Kurzweil wanted to set the record straight:

“I’m not sure where Musk is coming from. He’s a major investor in artificial intelligence. He was saying that we could have superintelligence in 5 years. That’s a very radical position—I don’t know any practitioners who believe that.”

He continued, “We have these emerging existential risks and we also have emerging, so far, effective ways of dealing with it…I think this concern will die down as we see more and more positive benefits of artificial intelligence and gain more confidence that we can control it.”

“We are going to directly merge with it, we are going to become the AIs,” he stated, adding, “We’ve always used our technology to extend our reach…That’s the nature of being human, to transcend our limitations, and there’s always been dangers.”

Close to the end of the interview, Kurzweil offered a simple reason why he’s optimistic about AI: we have no other choice lest we accept a scenario in which a totalitarian government controls AI. He stated it simply: “The best way to keep [artificial intelligence] safe is in fact widely distributed, which is what we are seeing in the world today.” – David J. Hill


Monday 15th June 2015


Australian academics who teach mathematics may need to run new ideas by the Department of Defence before sharing them or risk imprisonment.

From November 2016 Australian academics could face a potential 10-year prison term for sending information overseas if their ideas fall within the Defence Strategic Goods List (DSGL). Put another way, they could be jailed for delivering online course material to foreign students or providing international peers with access to a server hosting that material.

The laws are intended to prohibit the transfer of knowledge from Australia that could be used to produce weapons.

Academics like Kevin Korb are nervous that “overly broad” definitions in the DSGL could land them in court for teaching cryptography, high performance computing, image and signals processing and a number of other fields.

To avoid penalty, researchers may need to report newly hatched lines of inquiry to DECO (Defence Export Control Office).

“You will be coming to us and we will be working with you,” a DECO officer recently explained to academics. “When your ideas aren’t necessarily that formed, it may be that we say to you, ‘Look, at the moment we don’t see any concern, come back to us at a further stage’.”

Korb, an artificial intelligence researcher at Monash University’s Information Technology faculty, said the new restrictions will “suffocate” research. “Researchers and students are already leaving or avoiding Australia,” he told Fairfax.

“What is likely to happen is that Australia becomes isolated as the research and the researchers move elsewhere. No one wants to work somewhere where there’s totalitarian-like controls on thought,” renowned US cryptographer Bruce Schneier told Fairfax. – Liam Tung


Monday 15th June 2015

The computer revolution is going to keep going. Moore’s Law has a lot of steam left in it. You might be hearing that Moore’s Law is going to hit a wall… well, maybe there’ll be some fluctuations, but it’s going to keep going – Ralph Merkle


Monday 15th June 2015

An Asymptote Toward Zero

We are approaching the event horizon of a coming economic singularity, where all prices drop along an asymptote toward zero as technology advances exponentially.

As ephemeralization escalates, as we can do “more and more with less and less until we can do almost anything with practically nothing,” as Buckminster Fuller stated, old technologies, old energy sources, etc., slowly vanish.

What we, the human race, are facing as we race toward the event horizon of the coming economic and technological singularity is something that no human society or culture has ever experienced before – C. James Townsend


Monday 15th June 2015

The Singularity vs. The State

Plato’s cave is our status quo and we sit in our chains and are mesmerized by the pretty pictures on the wall. It is high time humanity grew up and finally left the cave.

No matter what the political state and its cronies do, or try to do, they will fail as more and more individuals unite to help bring about the coming economic and technological singularity.

How can you regulate or ban such things as guns when you can print an entire AK-47 at home on a 3D printer?

Or, once prices have dropped so low that you can have a fully equipped bio-lab in your garage, how can you suppress, say, an anti-aging technology or a cure for cancer?

The techno-libertarians, techno-progressives and Transhumanists are becoming a force to be reckoned with (though I am concerned about the influence of technocracy among many Transhumanists), and if the political state moved to ban such things as Uber, Lyft and Airbnb, it would have a riot on its hands. It is only going to get worse for the State.

I think more and more neo-leftists are going to wake up and realize, along with the libertarians, that Statism is an old failed religion and that their empowerment and freedom will truly come from the evolutionary forces released by the Technium.

The left-wing Hegelians foresaw that the State was destined to wither away and that a new holographic system, a holoarchy, would arise that would allow individuals to perfect themselves and to become the best they could be in a social and economic structure that gave them the time and abundance to do so.

If the techno-libertarians have taken up this course of action because we on “the left” have ignorantly abandoned it, then that is to our shame. We will have to play catch up and join them on the evolutionary journey to a new earth, one in which the arising Noosphere, the Global Brain, has fully evolved and the present order has been transcended.

I truly believe that the political State’s days are numbered. As profits and prices drop and technological deflation accelerates, the emperor will be seen by more and more people to be naked and standing in the way of the fuller life they wish to live.

The coming singularity is already showing us that the locus of power is shifting back to individuals united and interrelated in a new distributed network system. We have to have faith and trust in this new arising paradigm and complex system and help it along, to be its midwives, but instead I see too many people manipulated and moved by ideological fearmongering to prop up the old order and its outmoded ideas.

The old order has no answers for us and no solutions; in fact it has caused all of our problems that we are now dealing with.

I think in the end it may very well wither away from disuse, as more and more people leave it alone and turn to technological solutions and innovations to solve their own and the world’s problems. As A.J. Galambos theorized, we will finally learn to invent the technology that gives us absolute liberty and freedom: to be in total possession of our primary property, which is ourselves and all of our creative talents. – C. James Townsend


Sunday 28th June 2015

How Computers Will Crack the Genetic Code and Improve Billions of Lives

Machine learning and data science will do more to improve healthcare than all the biological sciences combined.

Human Longevity Inc. (HLI) is working on the most epic challenge — extending the healthy human lifespan.

Your genome consists of approximately 3.2 billion base pairs (your DNA) that literally code for “you.”

Your genes code for what diseases you might get, whether you are good at math or music, how good your memory is, what you look like, what you sound like, how you feel, how long you’ll likely live, and more.

This means that if we can decipher this genomic “code,” we can predict your biological future and proactively work to anticipate and improve your health.

It’s a data problem — and if you are a data scientist or machine-learning expert, it is the most challenging, interesting and important problem you could ever try to tackle.

When we compare your sequenced genome with millions of other people’s genomes AND other health data sets (see below), we can use machine learning and data mining techniques to correlate certain traits (eye color, what your face looks like) or diseases (Alzheimer’s, Huntington’s) to factors in the data and begin to develop diagnostics/therapies around them.
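
The “correlate certain traits … to factors in the data” step is, at its core, a statistical association scan across genetic variants. A minimal sketch of that idea on synthetic data (the sizes, effect strength and pure-Python correlation scan are all illustrative inventions, not HLI’s actual pipeline):

```python
import random

random.seed(0)
N_PEOPLE, N_SNPS, CAUSAL = 500, 10, 3  # toy sizes; real studies scan millions of variants

# Simulate genotypes: 0, 1 or 2 copies of a variant allele at each site.
genotypes = [[random.choice([0, 1, 2]) for _ in range(N_SNPS)] for _ in range(N_PEOPLE)]
# Simulate a binary trait driven almost entirely by the causal SNP.
trait = [1 if random.random() < 0.05 + 0.45 * person[CAUSAL] else 0 for person in genotypes]

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Correlate every SNP with the trait and rank them, GWAS-style.
scores = [abs(pearson([p[j] for p in genotypes], trait)) for j in range(N_SNPS)]
best = max(range(N_SNPS), key=lambda j: scores[j])
print(best)  # index of the most trait-associated SNP
```

With enough genomes, the trait-linked variant stands out sharply from the statistical noise; the hard part at HLI's scale is doing this across billions of base pairs and many phenotypes at once.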

[Figure: HLI biological data sets]

It’s a Translation Problem, Like Google Translate

With millions and millions of documents/websites/publications online that were already translated, and a crowd of 500 million users to correct and “teach” the algorithm, Google Translate can quickly and accurately translate between 90 different languages.

Our challenge now is applying similar techniques to all of this genomic data and these integrated health records… and we found the perfect person to lead this effort: Franz Och — the man responsible for building Google Translate.

Franz is a renowned expert in machine learning and machine translation. He spent 10 years at Google as a distinguished research scientist and the chief architect of Google Translate, literally building the system from the ground up.

Now, Franz is Human Longevity Inc.’s chief data scientist, responsible for developing new computational methods to translate between all of the human biological information.

When you ask Franz why he’s so excited about HLI, his answer is twofold: the mission and the challenge.

Franz explains, “The big thing is the mission — the ability to affect humanity in a positive way. If you are a data scientist, why focus on making a better messaging app or better Internet advertising, when you could be advancing the understanding of disease to make sick people better and of aging to make people live longer, healthier lives?”

As far as the challenge, he goes on: “The big mission is to learn how to interpret the human genome — to be able to predict anything that can be predicted from the source code that runs us.” – Peter Diamandis


Sunday 28th June 2015

An Asymptote Toward Zero

We are approaching the event horizon of a coming economic singularity, where all prices drop along an asymptote toward zero as technology advances exponentially.

As ephemeralization escalates, as we can do “more and more with less and less until we can do almost anything with practically nothing,” as Buckminster Fuller stated, old technologies, old energy sources, etc., slowly vanish – C. James Townsend


* Cost of digital processing, storage and bandwidth all crashing to price point of $0 (Oracle Sales Erode as Startups Embrace Souped-Up Free Software) – Max Keiser

* Things getting cheap = we becoming rich – Max Roser

* Under a deflationary, i.e. free-market, monetary system, all prices would look like this – Rothbardian



Wednesday 15th July 2015

Wall Street Tries Out Research Reports Written by Artificial Intelligence

Each day, Wall Street churns out millions of words encouraging investors to buy or sell stocks, bonds and mutual funds.

Now, a host of startups that use artificial intelligence to write news stories and other reports have set their sights on writing work at banks and financial-service companies.

As artificial intelligence takes on ever more tasks, Wall Street is getting more comfortable putting it to use. Programs such as Narrative Science’s essentially take data from filings, databases or internal documents and then use algorithms to synthesize the information for corporate presentations or product descriptions.

Swiss bank Credit Suisse Group AG is using automated writing to provide clients with corporate summaries on thousands of companies.

Fund companies T. Rowe Price Group Inc. and American Century Investments are testing automated-writing products that would tell customers how the companies’ funds invest money in a variety of stock-market strategies – Stephanie Yang


Wednesday 15th July 2015

An Asymptote Toward Zero

“As ephemeralization escalates we can do more and more with less and less until we can do almost anything with practically nothing” – Buckminster Fuller


The US economy: a tug of war between Moore’s Law (hyperdeflation) and federal subsidies (price inflation)

* Technology: disruption, automation, and an exponential drop in prices.

* Policy: bailout, subsidies, and continually rising prices.

Balaji S. Srinivasan


Wikipedia: free

EdX: free

Codecademy: free

Twitter: free

Google Search: free

Coursera/Khan Academy: free

Learn something new every day, $0.

Mohammed Al Saqqaf


* Bloomberg Commodity Price Index near 14-yr low – BCOM Quote

* Whatever isn’t deflationary deserves to die – Urban Future (2.1)


Friday 28th August 2015

The Real Giant Leap for Mankind


Neil Armstrong’s description of the moon landing as “a giant leap for mankind” is not quite the right wording.

Landing on the moon is in the same category as putting the first man in space or the first person climbing Mt. Everest—it’s a great achievement for mankind. But if the first ocean animal to touch dry land simply lay there for a minute before being washed back into the ocean, it would not qualify as a giant leap for life, and the moon landing shouldn’t either.

It’s only when certain mutated fish began to live on land in a sustainable way that life as a whole made a giant leap.

Previous Giant Leaps:

* Simple cells ——> complex cells ———> multicellular life.

* Life explodes into diversity

* Life emerges from the ocean onto land

* An increasing diversification of the great apes leading to the human-chimpanzee tribe split

* The progression of the Homo genus that eventually led to humans.

* Major Migrations

* The development of language, farming and writing

* The birth of the industrialized world

Colonizing Mars permanently will be a giant leap for mankind.


But shouldn’t we pause for a minute and note that it’s a little weird that, after 3.8 billion years—38,000,000 centuries—I’m claiming that this century, we may witness a giant leap on par with the six or seven greatest leaps in history? How could that possibly be?

And wait, this reminds me of something. When we dove into artificial intelligence, it certainly seemed like A) something that might explode into superintelligence in the next century, and B) something that might permanently and dramatically affect all life on the planet (for better or worse). Would that also qualify as a potential giant leap?

And—as our understanding of the human genome advances and the science of genetic engineering races forward, isn’t it conceivable that in 100 years, science may have figured out how to keep humans alive for much longer than a normal biological lifespan and put people through legitimate reverse-aging procedures? If that happened and we conquered aging, wouldn’t that also make the big, big list of significant events in life history?

What the hell is going on??

Either I’m being hopelessly naive or this is a very intense time to be alive.


When a species becomes so powerful that they can achieve giant grand-scale life leaps in under a century, they can essentially play god, in many different ways. Let’s call that reaching the God Point.

If progress is indeed accelerating, it makes sense that an advanced species would eventually hit the God Point, and there seems to be plenty of evidence that humans are either already there or very close—advancements in fields like space travel, artificial intelligence, biotechnology, particle physics, nanotechnology, and weaponry open the door to a long list of unthinkably-dramatic impacts on the future.

The reality is that we’re living in a time when we could witness multiple events in our lifetimes as impactful as life going from the ocean to land. Not only might we be on the cusp of the great leap of life becoming multi-planetary, we may be on the cusp of a bunch of other great leaps as well.


There are other signs pointing to this being an extraordinarily unusual time to be alive:

  • For 99.8% of human history, the world population was under 1 billion people. In the last 0.2% of that history, it has crossed the 1, 2, 3, 4, 5, 6, and 7 billion marks.
  • Up until 25 years ago, there had never been such a thing as a global brain of godlike information access and connectivity on this planet. Today we have the internet.
  • Humans walked around or rode horses for 999 of the last 1,000 centuries. In this century, we drive cars, fly planes, and land on the moon.
  • If extra-terrestrial life were looking for other life in the universe, it would be dramatically easier to find us this century than in any century before, as we project millions of signals out into space.


If we take a step back and just look at the situation, it should be clear that nothing that’s happening right now is normal.

 Current humans have FAR more power than any life on Earth ever has, and it seems very likely that if in a billion years, an alien history major writes a term paper on the history of life on Earth, the time we’re living in right now—however it turns out—will be a major part of that paper.

When a planet’s life reaches high intelligence, it usually means they’re a couple hundred thousand years away from their do-or-die moment. Their progress will accelerate faster and faster until finally they hit the God Point, when they simultaneously gain the power to forever end species vulnerability or drive themselves accidentally extinct—and it’s all about which comes first.

Those species who hit the God Point, then enter the chaos that inevitably ensues, and somehow come out alive on the other side have “made it through,” and they can officially join the universe’s community of grown-up, immortal, intelligent species.


More than any particular Mars population goal, Elon Musk wants to die knowing we’re on our way to what he describes as “the threshold at which even if the spaceships from Earth stop coming, the colony doesn’t slowly die out.” That, he says, “is the critical threshold for us as a civilization to not join the potentially large number of one-planet dead civilizations out there.” A million people is his rough estimate for where that threshold lies, but no one knows for sure.

When—if—we do one day get to that point, only then will we have made the giant leap for mankind Neil Armstrong referred to. Humanity’s future will be much more secure and much more likely to survive deep into the future.

My gut says that we’re probably much closer to the beginning of The Story of Humans and Space than the middle or the end. It seems like we’re right around the end of “Chapter 1: Confined to Earth”—maybe on the very last page. And as the story moves forward, it may begin to take place on a much wider stage than the Earth, making The Story of Humans and Space ultimately indistinguishable from The Story of Humans.

It’s no more possible to predict what will happen in those chapters than it would have been for a farmer in 2500 BC Mesopotamia to envision our world today. – Tim Urban


Wednesday 9th September 2015

As More Work is Being Done by Machines, What will Humans be Good for?

One of the things I like about Kurzweil is that he doesn’t get bogged down in the politics of such issues, but it’s evident that he understands how markets work. He realizes that as menial tasks are replaced by automation (as proven historically), people will find more fulfilling things to do with their time. Everybody benefits when innovation occurs, just sometimes not right now for certain individuals.

He’s actually the person that got me on the libertarian track because he spoke about the deflationary effect of technological progress in Age of Spiritual Machines and how it was ultimately going to be a positive development. This went counter to everything I had been taught in undergrad economics, so my ears definitely pricked up when he said that.

When I did more research on that subject, I found a multitude of information on it from free-market economists and ultimately accepted that all innovation and wealth generation in modern times is as a result of liberty surrounding our economic lives.

As Kurzweil so insightfully explains in that book, if you try to hold back the advancement of technologies, you simply drive them underground into the criminal world and/or centralize them into governments. I would go on to say that if you hold back the economic freedom of individuals within societies, you will eliminate much of the benefit that technological advancement affords. – BradskyB


Wednesday 9th September 2015

Is a Cambrian Explosion Coming for Robotics?

Many of the base hardware technologies on which robots depend—particularly computing, data storage, and communications—have been improving at exponential growth rates. Two newly blossoming technologies—“Cloud Robotics” and “Deep Learning”—could leverage these base technologies in a virtuous cycle of explosive growth.

In Cloud Robotics—a term coined by James Kuffner (2010)—every robot learns from the experiences of all robots, which leads to rapid growth of robot competence, particularly as the number of robots grows.

Deep Learning algorithms are a method for robots to learn and generalize their associations based on very large (and often cloud-based) “training sets” that typically include millions of examples. Interestingly, Li (2014) noted that one of the robotic capabilities recently enabled by these combined technologies is vision—the same capability that may have played a leading role in the Cambrian Explosion.
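
The core mechanism—learning a generalizable association from a labeled training set by gradient descent—scales from millions of examples down to a toy case. As a single-neuron caricature (entirely illustrative; real deep networks stack many such units and train on vastly larger sets), here is a logistic neuron learning a two-bit association from four examples:

```python
import math
import random

random.seed(1)
# Training set: associate 2-bit inputs with the AND label.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1) for _ in range(2)]  # random initial weights
b = 0.0
lr = 1.0

def predict(x):
    """Logistic (sigmoid) output of the neuron for input x."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

for _ in range(2000):      # many passes over the training set
    for x, y in data:
        p = predict(x)
        err = p - y        # gradient of the log-loss w.r.t. the pre-activation
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print([round(predict(x)) for x, _ in data])  # -> [0, 0, 0, 1]
```

Deep Learning's recent leap came not from changing this basic update rule, which has existed for decades, but from the cloud-scale data and compute that let stacked layers of such units learn far richer associations.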

How soon might a Cambrian Explosion of robotics occur? It is hard to tell.

The very fast improvement of Deep Learning has been surprising, even to experts in the field. The recent availability of large amounts of training data and computing resources on the cloud has made this possible; the algorithms being used have existed for some time and the learning process has actually become simpler as performance has improved.

The timing of tipping points is hard to predict, and exactly when an explosion in robotics capabilities will occur is not clear. Commercial investment in autonomy and robotics—including and especially in autonomous cars—has significantly accelerated, with high-profile firms like Amazon, Apple, Google, and Uber all involved.

Human beings communicate externally with one another relatively slowly, at rates on the order of 10 bits per second. Robots, and computers in general, can communicate at rates over one gigabit per second—or roughly 100 million times faster. Based on this tremendous difference in external communication speeds, a combination of wireless and Internet communication can be exploited to share what is learned by every robot with all robots.
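
The “100 million times faster” figure follows directly from the two rates quoted:

```python
human_rate = 10                 # bits per second: order-of-magnitude estimate for speech
robot_rate = 1_000_000_000      # one gigabit per second
ratio = robot_rate // human_rate
print(ratio)                    # -> 100000000, i.e. ~100 million times faster
```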

Human beings take decades to learn enough to add meaningfully to the compendium of common knowledge. However, robots not only stand on the shoulders of each other’s learning, but can start adding to the compendium of robot knowledge almost immediately after their creation.

The online repository of visually recorded objects and human activity is a tremendous resource that robots may soon exploit to improve their ability to understand and interact with the world, including interactions with human beings. Users uploaded more than 1 trillion photos to social media sites in 2013 and 2014 combined, and given the growth rate may upload another trillion in 2015.

The key problems in robot capability yet to be solved are those of generalizable knowledge representation and of cognition based on that representation. How can computer memories represent knowledge to be retrieved by memory-based methods so that similar but not identical situations will call up the appropriate memories and thoughts?

Significant cues are coming from the expanding understanding of the human brain, with the rate of understanding accelerating because of new brain imaging tools. Some machine learning algorithms, like the Deep Learning approaches discussed earlier, are being applied in an attempt to discover generalizable representations automatically.

It is not clear how soon this problem will be solved. It may only be a few years until robots take off—or considerably longer. Robots are already making large strides in their abilities, but as the generalizable knowledge representation problem is addressed, the growth of robot capabilities will begin in earnest, and it will likely be explosive. The effects on economic output and human workers are certain to be profound. – Gill A. Pratt


Friday 2nd October 2015

Rise of the Machines: The Industrial Internet of Things is Taking Shape


Fortunes will be made acquiring analogue manufacturers and digitizing them. Comparable to Russian privatization. – Pierre Rochard


Beyond smart watches and FitBits, forward-thinking businesses are applying the concept of IoT to complex, physical machinery, like jet engines and locomotives, to unleash unexpected growth opportunities and fuel innovation.

Combined with data analytics, the industrial IoT lets companies impact the economy, the job market and the future; it has the potential to add $15 trillion to the global economy by 2030, according to Accenture.

The first iteration of the IoT saw users quickly adopting wearable devices to track everything from nutritional intake and sleep patterns to calories burned and steps taken. Following suit, businesses began to look at how they could use data from connected sensors to optimize how they functioned.

Now they are taking this a step further and applying these insights to bigger machines equipped with data-gathering sensors. This new movement, the industrial IoT, is accelerating the connection of objects with humans and also with other objects to reveal deep insights.

Now, small sensors on pieces of complex machinery can emit data about performance status that can then be used to adjust scheduled maintenance. With this data, teams can predict maintenance failures, proactively prevent them and ultimately reduce downtime.

This approach has also been applied to resource allocation and energy management. With sensors on onshore and offshore oil pumps, companies can minimize lost production and save massive amounts of money. – Stefan Groschupf


Wednesday 28th October 2015

A Few Amazing Things We Have Today, That Back to the Future Missed

[Image: Marty Jr. viewing an incoming phone call in his glasses.]

  1. Rapid, cheap whole genome sequencing and editing: We now have the ability to sequence a full human genome for under $1,000. The technology is developing at 3x the rate of Moore’s Law. We now have the ability to cheaply and precisely edit the genome with CRISPR/Cas9. This will open up a new frontier of health and longevity that will have enormous implications on the future.
  2. 3D Printing: You can 3D print just about anything these days from 300 different materials — plastics, metals, concrete, chocolates, human cells. Complexity is free and scalability is inherent.
  3. Emergence of AI: We are in the early days of artificial intelligence. Tens of billions in capital have been poured into an AI “arms race” over the last decade. One fun recent example is Tesla’s “autopilot” software upgrade that just came out — their AI can drive you autonomously on the highway.
  4. On-Demand Economy: Amazon is working on same-day delivery mechanisms (possibly using drones). Uber has become ubiquitous as the simplest, most reliable way to get around.
  5. GPS: We really take for granted how good the GPS units in our phones really are. They receive up-to-the-second traffic data, route us to the shortest path, and even give us “street view” or satellite imagery to investigate what a place looks like before we get there.
  6. Private Spaceflight and Hyperloop: While Back to the Future flaunted flying DeLoreans, I’m proud of where we are with private spaceflight and the start of Hyperloop.

Peter Diamandis


Saturday 21st November 2015

Great News: Intro to Computer Science Overtakes Economics as Harvard’s Most Popular Class

The most popular fall-semester course at Harvard is Introduction to Computer Science I.

The tech course enrolled almost 820 students for the current fall semester. That total is the highest in the three decades the course has been offered and it’s the biggest class offered at Harvard in at least a decade, according to The Harvard Crimson.

Most interesting, though, is that the course has supplanted “Introduction to Economics” as the Ivy League school’s most popular course. – Tom Huddleston, Jr.


Saturday 21st November 2015

Learning in Virtual Reality

Humanity is standing on a precipice. We have never been closer to achieving a world where everyone has the ability to live and thrive. Biotech, nanotech and AI promise to reshape the world and have the potential to imbue humanity with near-godlike powers.

Virtual reality (VR) is often called the “final medium” due to its unparalleled power to share experiences and ideas. VR films and stories are shockingly effective at generating empathy and creating the impetus for action. VR education will allow us to learn faster and more interactively than ever before. And VR collaboration spaces will allow us to work from anywhere to solve the world’s grand challenges.

The potential use cases for VR in classrooms are endless:

A history teacher could lead his or her class on a tour of ancient Rome, providing a visceral connection to the past which was never before possible.

Science teachers can take their students to another galaxy, or shrink them down and show them chemical reactions at the molecular scale.

Imagine a physics class where students take a trip to Mars, learn the physics of launching a rocket to orbit and then work with a group to plan out a rocket launch.  – Jason Ganz


Saturday 12th December 2015

When Computers Dematerialize into Everyday Objects

Jason Silva / Qualcomm


Tuesday 29th December 2015

Building The Quantum Dream Machine

John Martinis has been researching how quantum computers could work for 30 years. Now he could be on the verge of finally making a useful one.

With his new Google lab up and running, Martinis guesses that he can demonstrate a small but useful quantum computer in two or three years. “We often say to each other that we’re in the process of giving birth to the quantum computer industry,” he says.

The new computer would let a Google coder run calculations in a coffee break that would take a supercomputer of today millions of years.

The software that Google has developed on ordinary computers to drive cars or answer questions could become vastly more intelligent. And earlier-stage ideas bubbling up at Google and its parent company, such as robots that can serve as emergency responders or software that can converse at a human level, might become real.

As recently as last week the prospect of a quantum computer doing anything useful within a few years seemed remote. Researchers in government, academic, and corporate labs were far from combining enough qubits to make even a simple proof-of-principle machine.

A well-funded Canadian startup called D-Wave Systems sold a few of what it called “the world’s first commercial quantum computers” but spent years failing to convince experts that the machines actually were doing what a quantum computer should.

Then NASA summoned journalists to building N-258 at its Ames Research Center in Mountain View, California, which since 2013 has hosted a D-Wave computer bought by Google.

There Hartmut Neven, who leads the Quantum Artificial Intelligence lab Google established to experiment with the D-Wave machine, unveiled the first real evidence that it can offer the power proponents of quantum computing have promised.

In a carefully designed test, the superconducting chip inside D-Wave’s computer—known as a quantum annealer—had performed 100 million times faster than a conventional processor.

However, this kind of advantage needs to be available in practical computing tasks, not just contrived tests. “We need to make it easier to take a problem that comes up at an engineer’s desk and put it into the computer,” said Neven.

That’s where Martinis comes in. Neven doesn’t think D-Wave can get a version of its quantum annealer ready to serve Google’s engineers quickly enough, so he hired Martinis to do it.

“It became clear that we can’t just wait,” Neven says. “There’s a list of shortcomings that need to be overcome in order to arrive at a real technology.”

He says the qubits on D-Wave’s chip are too unreliable and aren’t wired together thickly enough. (D-Wave’s CEO, Vern Brownell, responds that he’s not worried about competition from Google.)

Google will be competing not only with whatever improvements D-Wave can make, but also with Microsoft and IBM, which have substantial quantum computing projects of their own.

But those companies are focused on designs much further from becoming practically useful. Indeed, a rough internal time line for Google’s project estimates that Martinis’s group can make a quantum annealer with 100 qubits as soon as 2017.

The difficulty of creating qubits that are stable enough is the reason we don’t have quantum computers yet. But Martinis has been working on that for more than 11 years and thinks he’s nearly there.

The coherence time of his qubits — the length of time they can maintain a superposition — is tens of microseconds: about 10,000 times the figure for those on D-Wave’s chip.
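
A back-of-the-envelope check on that ratio (20 µs is an assumed stand-in for "tens of microseconds"; the article gives no exact figures):

```python
# If Martinis's qubits hold a superposition for ~20 microseconds and that is
# ~10,000x the figure for D-Wave's chip, D-Wave's qubits would cohere for
# only a couple of nanoseconds.
martinis_coherence_s = 20e-6                 # assumed "tens of microseconds"
dwave_coherence_s = martinis_coherence_s / 10_000
print(f"{dwave_coherence_s * 1e9:.0f} ns")   # → 2 ns
```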

Martinis aims to show off a complete universal quantum computer with about 100 qubits around the same time he delivers Google’s new quantum annealer, in about two years.

He thinks that once he can get his qubits reliable enough to put 100 of them on a universal quantum chip, the path to combining many more will open up. “This is something we understand pretty well,” he says. “It’s hard to get coherence but easy to scale up.”

Figuring out how Martinis’s chips can make Google’s software less stupid falls to Neven.

He thinks that the prodigious power of qubits will narrow the gap between machine learning and biological learning—and remake the field of artificial intelligence. “Machine learning will be transformed into quantum learning,” he says. That could mean software that can learn from messier data, or from less data, or even without explicit instruction.

Neven muses that this kind of computational muscle could be the key to giving computers capabilities today limited to humans. “People talk about whether we can make creative machines–the most creative systems we can build will be quantum AI systems,” he says.

Neven pictures rows of superconducting chips lined up in data centers for Google engineers to access over the Internet relatively soon.

“I would predict that in 10 years there’s nothing but quantum machine learning–you don’t do the conventional way anymore,” he says.

A smiling Martinis warily accepts that vision. “I like that, but it’s hard,” he says. “He can say that, but I have to build it.” – Tom Simonite


Wednesday 20th January 2016

The Law of Accelerating Returns

Graphic from Singularity is Near, demonstrating “Law of Accelerating Returns” in the field of computation

* The sixth paradigm – three-dimensional computing – is already underway

Moore’s Law only refers to the exponential price-performance improvements of integrated circuits (over the last 50 years).

Exponential growth has been going on for a much longer period and is occurring in fields outside of computing, such as communication and genomics.

Such exponential growth is actually described by “The Law of Accelerating Returns,” a term coined by my friend and Singularity University Chancellor/Co-founder Ray Kurzweil.

As Ray Kurzweil described in his most excellent book, The Singularity Is Near, exponential growth in computation has existed for over a century, and has gone through five different paradigms of exponential growth:

  • 1st Paradigm: Electromechanical computers
  • 2nd Paradigm: Relay-based computers
  • 3rd Paradigm: Vacuum-tube based computers
  • 4th Paradigm: Transistor-based computers
  • 5th Paradigm: Integrated circuits (Moore’s Law)

Moore’s Law (the 5th paradigm of computation) is therefore a subset of a much broader exponential principle described by Kurzweil’s Law of Accelerating Returns.
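
To make the compounding concrete, here is a minimal sketch, assuming (purely for illustration) a constant 18-month price-performance doubling time; Kurzweil's fitted doubling times vary by era and paradigm.

```python
# Cumulative price-performance improvement under a fixed doubling time.
def improvement_factor(years, doubling_time_years=1.5):
    return 2 ** (years / doubling_time_years)

for years in (10, 25, 50):
    print(f"after {years} years: ~{improvement_factor(years):,.0f}x")
```

The point of the Law of Accelerating Returns is that when one paradigm (vacuum tubes, transistors, integrated circuits) saturates, the next takes over, so this kind of compounding continues across paradigms rather than stopping within one.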

It’s important to note that Ray recently mentioned to me that the sixth paradigm – three-dimensional computing – is already underway. – Peter Diamandis


Wednesday 20th January 2016

Exponential Developments

To paraphrase Kurzweil… The Law of Accelerating Returns also explains exponential advancement of life (biology) on this planet.

Looking at biological evolution on Earth, the first step was the emergence of DNA, which provided a digital method to record the results of evolutionary experiments.

Then, the evolution of cells, tissues, organs and a multitude of species that ultimately combined rational thought with an opposable appendage (i.e., the thumb) caused a fundamental paradigm shift from biology to technology.

The first technological steps — sharp edges, fire, the wheel — took tens of thousands of years. For people living in this era, there was little noticeable technological change in even a thousand years.

By 1000 A.D., progress was much faster and a paradigm shift required only a century or two.

In the 19th century, we saw more technological change than in the nine centuries preceding it.

Then in the first 20 years of the 20th century, we saw more advancement than in all of the 19th century.

Now, paradigm shifts occur in only a few years’ time. The World Wide Web did not exist in anything like its present form just a decade ago, and didn’t exist at all two decades before that.

As these exponential developments continue, we will begin to unlock unfathomably productive capabilities and begin to understand how to solve the world’s most challenging problems. There has never been a more exciting time to be alive. – Peter Diamandis


Wednesday 17th February 2016

Marvin Minsky, “Father of Artificial Intelligence,” Dies at 88

Marvin Minsky combined a scientist’s thirst for knowledge with a philosopher’s quest for truth as a pioneering explorer of artificial intelligence, work that helped inspire the creation of the personal computer and the Internet.

His family said the cause of death was a cerebral hemorrhage.

Well before the advent of the microprocessor and the supercomputer, Professor Minsky, a revered computer science educator at M.I.T., laid the foundation for the field of artificial intelligence by demonstrating the possibilities of imparting common-sense reasoning to computers.

“Marvin was one of the very few people in computing whose visions and perspectives liberated the computer from being a glorified adding machine to start to realize its destiny as one of the most powerful amplifiers for human endeavors in history,” said Alan Kay, a computer scientist and a friend and colleague of Professor Minsky’s.

Fascinated since his undergraduate days at Harvard by the mysteries of human intelligence and thinking, Professor Minsky saw no difference between the thinking processes of humans and those of machines.

Beginning in the early 1950s, he worked on computational ideas to characterize human psychological processes and produced theories on how to endow machines with intelligence.

Professor Minsky, in 1959, co-founded the M.I.T. Artificial Intelligence Project (later the Artificial Intelligence Laboratory) with his colleague John McCarthy, who is credited with coining the term “artificial intelligence.”

Professor Minsky’s scientific accomplishments spanned a variety of disciplines.

He designed and built some of the first visual scanners and mechanical hands with tactile sensors, advances that influenced modern robotics.

In 1951 he built the first randomly wired neural network learning machine, which he called Snarc.

In 1956, while at Harvard, he invented and built the first confocal scanning microscope, an optical instrument with superior resolution and image quality still in wide use in the biological sciences. – Glenn Rifkin


Wednesday 17th February 2016

The Most Foundational Person after Turing on Computability

Marvin Minsky is rightly being remembered for his foundational and continuing contributions to Artificial Intelligence. But let’s not forget that he was also the most foundational person after Turing on computability.

His 1967 book Computation: Finite and Infinite Machines is a mathematical masterpiece—I shall pull it off my bookshelf when I get home tonight from a multi-day trip and savor it once again, as I have so many times before.

That book pulled together the threads of what computation meant, from Turing, Post, and Kleene, into coherent mathematical unity, and extended the questions and answers around what could be computed with many new insights and theorems.

With this foundation, the theory of computation could turn to algorithmic complexity which has dominated the field ever since. That book is an amazing tour de force. For a normal mortal it would have been the defining point of a life’s work. Forget about all that AI stuff he did! – Rodney A. Brooks


Wednesday 17th February 2016

Remembering Minsky

He was the consummate educator, for that was his greatest joy and passion. But he was also many other things: a scientist, a mathematician, an inventor, an engineer, a roboticist, a writer, a philosopher, a polymath, a poet, a musician, and most of all a student of human nature and thinking.

He was the principal pioneer of both the symbolic and connectionist schools of AI and made profound contributions that have enriched the field of computer science and all of science. He was one of humanity’s great thinkers. He was also my only mentor. He will be deeply missed. – Ray Kurzweil


Wednesday 17th February 2016

Minsky on the Singularity

Is the Singularity near?

The answer is yes, depending on what you mean by near, but it may well be within our lifetimes. – Marvin Minsky, 2010


Sunday 13th March 2016

Investing in Robotics and AI Companies

Here are some AI (and robotics) related companies to think about.

I’m not saying you should buy them (now) or sell for that matter, but they are definitely worth considering at the right valuations.

Think about becoming an owner of AI and robotics companies while there is still time. I plan to buy some of the most obvious ones (including Google) in the ongoing market downturn (2016-2017).

Top 6 most obvious AI companies

  • Alphabet (Google)
  • Facebook (M, Deep Learning)
  • IBM (Watson, neuromorphic chips)
  • Apple (Siri)
  • Microsoft (Skype real-time translation, emotion recognition)
  • Amazon (customer prediction; link to old article)

Yes, I’m US-centric. So sue me 🙂


  • SAP (BI)
  • Oracle (BI)
  • Sony
  • Samsung
  • Twitter
  • Baidu
  • Alibaba
  • NEC
  • Nidec
  • Nuance (HHMM, speech)
  • Marketo
  • Opower
  • Nippon Ceramic
  • Pacific Industrial

Private companies (*I think):

  • *Mobvoi
  • *Scaled Inference
  • *Kensho
  • *Expect Labs
  • *Vicarious
  • *Nara Logics
  • *Context Relevant
  • *MetaMind
  • *Rethink Robotics
  • *Sentient Technologies
  • *Mobileye

General AI areas to consider when searching for AI companies

  • Self-driving cars
  • Language processing
  • Search agents
  • Image processing
  • Robotics
  • Machine learning
  • Expert systems
  • Oil and mineral exploration
  • Pharmaceutical research
  • Materials research
  • Computer chips (neuromorphic, memristors)
  • Energy, power utilities

Mikael Syding


DeepMind Smartphone Assistant

The movie Her is just an easy, popular, mainstream view of what that sort of thing is. We would like these smartphone assistant things to actually be smart and contextual and have a deeper understanding of what you’re trying to do.

At the moment most of these systems are extremely brittle — once you go off the templates that have been pre-programmed then they’re pretty useless. So it’s about making that actually adaptable and flexible and more robust.

It’s this dichotomy between pre-programmed and learnt. At the moment pretty much all smartphone assistants are special-cased and pre-programmed and that means they’re brittle because they can only do the things they were pre-programmed for. And the real world’s very messy and complicated and users do all sorts of unpredictable things that you can’t know ahead of time.

Our belief at DeepMind, certainly this was the founding principle, is that the only way to do intelligence is to do learning from the ground up and be general.

I think in the next two to three years you’ll start seeing it. I mean, it’ll be quite subtle to begin with, certain aspects will just work better. Maybe looking four to five, five-plus years away you’ll start seeing a big step change in capabilities. – Demis Hassabis


Sunday 13th March 2016

Google’s AI Takes Historic Match Against Go Champ

* Machines have conquered the last games. Now comes the real world

Google’s artificially intelligent Go-playing computer system has claimed victory in its historic match with Korean grandmaster Lee Sedol after winning a third straight game in this best-of-five series.

Go is exponentially more complex than chess and requires an added level of intuition—at least among humans. This makes the win a major milestone for AI—a moment whose meaning extends well beyond a single game.

Just two years ago, most experts believed that another decade would pass before a machine could claim this prize. But then researchers at DeepMind—a London AI lab acquired by Google—changed the equation using two increasingly powerful forms of machine learning, technologies that allow machines to learn largely on their own. Lee Sedol is widely regarded as the best Go player of the past decade. But he was beaten by a machine that taught itself to play the ancient game.

The machine learning techniques at the heart of AlphaGo already drive so many services inside the Internet giant—helping to identify faces in photos, recognize commands spoken into smartphones, choose Internet search results, and much more. They could also potentially reinvent everything from scientific research to robotics.

The machine plays like no human ever would—quite literally.

Using what are called deep neural networks—vast networks of hardware and software that mimic the web of neurons in the human brain—AlphaGo initially learned the game by analyzing thousands of moves from real live Go grandmasters. But then, using a sister technology called reinforcement learning, it reached a new level by playing game after game against itself, coming to recognize moves that give it the highest probability of winning.
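
AlphaGo's actual pipeline pairs deep neural networks with tree search at enormous scale, but the self-play loop itself can be illustrated with a toy agent. Everything below is invented for the sketch (the counting game, the learning rate, the exploration rate): two copies of the same policy play each other, and moves on winning lines get their estimated win rate nudged upward.

```python
import random

# Toy self-play learner. Game: players alternately add 1 or 2 to a running
# total; whoever lands exactly on 10 wins. The agent estimates a win rate
# for each (total, move) pair purely by playing against itself and
# reinforcing the moves that led to wins.
TARGET = 10

def train(episodes=20000, epsilon=0.1, alpha=0.05, seed=0):
    rng = random.Random(seed)
    q = {}  # (total, move) -> estimated win rate for the player moving
    for _ in range(episodes):
        total, history = 0, []
        while total < TARGET:
            moves = [m for m in (1, 2) if total + m <= TARGET]
            if rng.random() < epsilon:          # explore occasionally
                move = rng.choice(moves)
            else:                               # otherwise play greedily
                move = max(moves, key=lambda m: q.get((total, m), 0.5))
            history.append((total, move))
            total += move
        # The player who made the last move won; credit moves accordingly.
        for i, key in enumerate(reversed(history)):
            reward = 1.0 if i % 2 == 0 else 0.0
            q[key] = q.get(key, 0.5) + alpha * (reward - q.get(key, 0.5))
    return q

q = train()
# From a total of 8, adding 2 wins immediately; self-play discovers this.
print(q.get((8, 2), 0.5), q.get((8, 1), 0.5))
```

Nothing in the code encodes the winning strategy (hand your opponent a total of 1, 4, or 7), yet the learned values tend to come to reflect it. Learning strategy from self-generated experience rather than from handcrafted rules is the property that let AlphaGo go beyond its human training data.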

The result is a machine that often makes the most inhuman of moves.

This happened in Game Two—in a very big way. With its 19th move, AlphaGo made a play that shocked just about everyone, including both the commentators and Lee Sedol, who needed nearly fifteen minutes to choose a response. The commentators couldn’t even begin to evaluate AlphaGo’s move, but it proved effective. Three hours later, AlphaGo had won the game.

This week’s match is so meaningful because this ancient pastime is so complex. As Google likes to say of Go: there are more possible positions on the board than atoms in the universe.
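
Google's atoms comparison is easy to verify. The sketch below uses the simple upper bound of 3^361 board colorings (each of the 361 points is empty, black, or white); the exact count of legal positions (~2.1 × 10^170, computed by John Tromp in January 2016) is smaller, but still dwarfs the roughly 10^80 atoms in the observable universe.

```python
# Upper bound on Go positions: each of the 19x19 = 361 points is
# empty, black, or white.
positions_upper_bound = 3 ** 361
atoms_in_universe = 10 ** 80     # common order-of-magnitude estimate

print(len(str(positions_upper_bound)))               # → 173 digits, i.e. ~1.7e172
print(positions_upper_bound > atoms_in_universe**2)  # → True
```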

Just a few days earlier, most in the Go community were sure this wasn’t possible. But these wins were decisive. Machines have conquered the last games. Now comes the real world. – Cade Metz


Sunday 13th March 2016

DeepMind Founder Demis Hassabis Wants to Solve Intelligence

The aim of DeepMind is not just to beat games, fun and exciting though that is. It’s that they’re useful as a testbed, a platform for trying out our algorithmic ideas and testing how far they scale and how well they do, and it’s just a very efficient way of doing that. Ultimately we want to apply this to big real-world problems.

We’re concentrating at the moment on things like healthcare and recommendation systems, these kinds of things.

What I’m really excited to use this kind of AI for is science, and advancing that faster. I’d like to see AI-assisted science where you have effectively AI research assistants that do a lot of the drudgery work and surface interesting articles, find structure in vast amounts of data, and then surface that to the human experts and scientists who can make quicker breakthroughs.

I was giving a talk at CERN a few months ago; obviously they create more data than pretty much anyone on the planet, and for all we know there could be new particles sitting on their massive hard drives somewhere and no-one’s got around to analyzing that because there’s just so much data. So I think it’d be cool if one day an AI was involved in finding a new particle. – Demis Hassabis


Sunday 13th March 2016

Does Google DeepMind’s A.I. Exhibit Super-Human Abilities? Some Japanese Pros Think So

Tuur Demeester


Sunday 13th March 2016

The Power and the Mystery

At first, Fan Hui thought the move was rather odd. But then he saw its beauty.

“It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.” It’s a word he keeps repeating. Beautiful. Beautiful. Beautiful.

The move in question was the 37th in the second game of the historic Go match between Lee Sedol, one of the world’s top players, and AlphaGo, an artificially intelligent computing system built by researchers at Google. Inside the towering Four Seasons hotel in downtown Seoul, the game was approaching the end of its first hour when AlphaGo instructed its human assistant to place a black stone in a largely open area on the right-hand side of the 19-by-19 grid that defines this ancient game. And just about everyone was shocked.

“That’s a very strange move,” said one of the match’s English language commentators, who is himself a very talented Go player. Then the other chuckled and said: “I thought it was a mistake.” But perhaps no one was more surprised than Lee Sedol, who stood up and left the match room. “He had to go wash his face or something—just to recover,” said the first commentator.

Even after Lee Sedol returned to the table, he didn’t quite know what to do, spending nearly 15 minutes considering his next play. AlphaGo’s move didn’t seem to connect with what had come before. In essence, the machine was abandoning a group of stones on the lower half of the board to make a play in a different area.

AlphaGo placed its black stone just beneath a single white stone played earlier by Lee Sedol, and though the move may have made sense in another situation, it was completely unexpected in that particular place at that particular time—a surprise all the more remarkable when you consider that people have been playing Go for more than 2,500 years.

The commentators couldn’t even begin to evaluate the merits of the move. – Cade Metz

* Now imagine the same ultra-competency being developed in other endeavors like medicine, law, scientific research, and war. – Veteran4Peace


Sunday 13th March 2016

AI Can Put Us on the Most Optimal Paths to Solving Problems

Just like this game of Go, AI will eventually start ‘thinking in ways we never conceived’ about many things like curing diseases.

There could be 50 different promising paths to curing cancer for instance but only enough funding and scientists to tackle the first 5 we think of, even if the ultimate cure is on one of the other 45.

What a fascinating idea it is that perhaps AI will think of every path and always put us on the most optimal ones. – iushciuweiush


Sunday 13th March 2016

What Problems Can Humans and Machines Overcome Together?

The AI research community has made incredible progress in the last five years.

A key insight has been that it’s much better to let computers figure out how to accomplish goals and improve through experience, rather than handcrafting instructions for every individual task. That’s also the secret to AlphaGo’s success.

The real challenges in the world are not “human versus machine,” but humans and whatever tools we can muster versus the intractable and complex problems that surround us. The most important struggles already have thousands of brilliant and dedicated people making progress on issues that affect every one of us.

Technologies such as AI will enhance our ability to respond to these pressing global challenges by providing powerful tools to aid experts make faster breakthroughs. We need machine learning to help us tame complexity, predict the unpredictable and support us as we achieve the previously impossible.

As our tools get smarter and more versatile, it’s incumbent upon us to start thinking much more ambitiously and creatively about solutions to society’s toughest global challenges. We need to reject the notion that some problems are just intractable. We can aim higher.

Consider what the world’s best clinicians or educators could achieve with machine learning tools assisting them. The real test isn’t whether a machine can defeat a human, but what problems humans and machines can overcome together. – Sundar Pichai and Demis Hassabis


Sunday 13th March 2016

AI is Coming

Artificial intelligence keeps progressing, whether you know about it (or like it) or not.

First an AI application is typically seen as a curiosity. Then it becomes a tool you need to learn how to use. And eventually it will develop to the point where it could take your job.

IBM’s Watson easily beat the world’s best Jeopardy! masters several years ago. Since then it has become the world’s foremost oncology expert.

Currently Watson is on its way to replacing swathes of paralegals at law firms, as well as finding new oil reserves.

A few years down the road, anyone with a cellphone (or AugReal contact lens) will be able to tap into Watson-like powers for any kind of search or research. – Mikael Syding


Sunday 13th March 2016

Anticipated Top AI Breakthroughs: 2016 – 2018

At A360 this year, my expert on AI was Stephen Gold, the CMO and VP of Business Development and Partner Programs at IBM Watson.

AI progress of late is furious — an R&D arms race is underway among the world’s top technology giants.

Soon AI will become the most important human collaboration tool ever created, amplifying our abilities and providing a simple user interface to all exponential technologies. Ultimately, it’s helping us speed toward a world of abundance.

The implications of true AI are staggering.

“It’s amazing,” said Gold. “For 50 years, we’ve ideated about this idea of artificial intelligence. But it’s only been in the last few years that we’ve seen a fundamental transformation in this technology.”

Here are Gold’s predictions for the most exciting, disruptive developments coming in AI in the next three years. As entrepreneurs and investors, these are the areas you should be focusing on, as the business opportunities are tremendous.

1 – Next-gen A.I. systems will beat the Turing Test

Alan Turing created the Turing Test over half a century ago as a way to determine a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human.

Loosely, if an artificial system passed the Turing Test, it could be considered “AI.”

Gold believes that “for all practical purposes, these systems will pass the Turing Test” in the next three-year period.

Perhaps more importantly, if they do, this event will accelerate the conversation about the proper use of these technologies and their applications.

2 – Leverage ALL health data (genomic, phenotypic, social) to redefine the practice of medicine.

“I think AI’s effect on healthcare will be far more pervasive and far quicker than anyone anticipates,” says Gold. “Even today, AI/machine learning is being used in oncology to identify optimal treatment patterns.”

But it goes far beyond this. AI is being used to match clinical trials with patients, drive robotic surgeons, read radiological findings and analyze genomic sequences.

3 – AI will be woven into the very fabric of our lives — physically and virtually.

Ultimately, during the AI revolution taking place in the next three years, AIs will be integrated into everything around us, combining sensors and networks and making all systems “smart.”

AIs will push forward the ideas of transparency, of seamless interaction with devices and information, making everything personalized and easy to use. We’ll be able to harness that sensor data and put it into an actionable form, at the moment when we need to make a decision.

Peter Diamandis


Tuesday 29th March 2016

Dubai to Host Futuristic Olympics in Dec 2017

Following the World Drone Prix in Dubai this weekend, which saw a 15-year-old British kid take home the top $250,000 prize, the gulf state has unveiled plans to hold a futuristic Olympics every two years starting in 2017.

It is billed to feature nine competitions, including driverless car racing, robotic soccer, robotic running, manned drone racing, robotic swimming, robotic table tennis, robotic wrestling, drone races and a cybathlon event for bionic athletes. – Kirsty Styles


Tuesday 29th March 2016

$3 Billion of Artificial Intelligence R&D Planned in South Korea

After what has been dubbed the ‘AlphaGo shock’, South Korea is getting serious about artificial intelligence.

South Korea, well known for its IT infrastructure, is promising 3.5 trillion won ($3 billion) in funding from the public and private sectors to develop artificial intelligence for corporate and university AI projects.

South Korea’s President Park Geun-hye assembled leaders across the country’s tech industry and senior government officials in Seoul last week to announce plans to invest the amount over the next five years.

It appears to be largely a reaction to the phenomenal performance of Google’s algorithm AlphaGo in an historic AI-versus-human game in Seoul earlier this month, which captured the South Korean media’s imagination.

“Above all, Korean society is ironically lucky, that thanks to the ‘AlphaGo shock’ we have learned the importance of AI before it is too late,” the president told local reporters assembled for the meeting, describing the game as a watershed moment of an imminent “fourth industrial revolution”.

South Korea will establish a new high-profile, public/private research centre with participation from several Korean conglomerates, including Samsung, LG, telecom giant KT, SK Telecom, Hyundai Motor, and internet portal Naver.

The institute was reportedly already in the works, but AlphaGo’s domination quickened the process of setting up the grouping. Some Korean media reports indicate that the institute could open its doors as early as 2017.

South Korea already funds two high-profile AI projects — Exobrain, which is intended to compete with IBM’s Watson computer, and Deep View, a computer vision project. – Philip Iglauer


Tuesday 29th March 2016

Things Are Progressing So Amazingly Fast

If I compare AI to what it was like when I first learned about it in 1971 or 1972 when I was a kid, it’s astounding what we can do now.

Self-driving cars on the streets, every game no matter how hard has master-level AI players, major funds are trading billions of dollars using AIs, diseases diagnosed by editing genomes according to patterns determined by AIs. It’s incredible. I look at the news every day and it reads like science fiction did when I was a kid. – Ben Goertzel


Tuesday 29th March 2016

“A Hundred Years Before A Computer Beats Humans at Go”

Go fans proudly note that a computer has not come close to mastering what remains a uniquely human game.

To play a decent game of Go, a computer must be endowed with the ability to recognize subtle, complex patterns and to draw on the kind of intuitive knowledge that is the hallmark of human intelligence.

”It may be a hundred years before a computer beats humans at Go — maybe even longer,” said Dr. Piet Hut, an astrophysicist at the Institute for Advanced Study in Princeton, N.J., and a fan of the game. ”If a reasonably intelligent person learned to play Go, in a few months he could beat all existing computer programs. You don’t have to be a Kasparov.”

When or if a computer defeats a human Go champion, it will be a sign that artificial intelligence is truly beginning to become as good as the real thing. – The New York Times, July 1997


Tuesday 29th March 2016

In Two Moves, AlphaGo and Lee Sedol Redefined the Future

In Game Two, the Google machine made a move that no human ever would. And it was beautiful. As the world looked on, the move so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence.

But in Game Four, the human made a move that no machine would ever expect. And it was beautiful too. Indeed, it was just as beautiful as the move from the Google machine—no less and no more. It showed that although machines are now capable of moments of genius, humans have hardly lost the ability to generate their own transcendent moments. And it seems that in the years to come, as we humans work with these machines, our genius will only grow in tandem with our creations.

Move 37


With the 37th move in the match’s second game, AlphaGo landed a surprise on the right-hand side of the 19-by-19 board that flummoxed even the world’s best Go players, including Lee Sedol. “That’s a very strange move,” said one commentator, himself a nine dan Go player, the highest rank there is. “I thought it was a mistake,” said the other.

Lee Sedol, after leaving the match room, took nearly fifteen minutes to formulate a response. Fan Hui—the three-time European Go champion who played AlphaGo during a closed-door match in October, losing five games to none—reacted with incredulity. But then, drawing on his experience with AlphaGo—he has played the machine time and again in the five months since October—Fan Hui saw the beauty in this rather unusual move.

AlphaGo had calculated that there was a one-in-ten-thousand chance that a human would make that move. But when it drew on all the knowledge it had accumulated by playing itself so many times—and looked ahead in the future of the game—it decided to make the move anyway. And the move was genius.
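That interplay, a policy prior estimating how likely a human is to play a move versus a search-based estimate of how well the move actually works, can be sketched with invented numbers. This is only an illustration of the idea, not AlphaGo's actual algorithm:

```python
# Toy illustration with invented numbers: a policy network scores each
# candidate move by how likely a human is to play it, while lookahead
# search estimates each move's win rate.
moves = {
    "shoulder_hit_37": {"prior": 0.0001, "search_value": 0.58},  # ~1 in 10,000
    "safe_extension":  {"prior": 0.3200, "search_value": 0.52},
    "corner_invasion": {"prior": 0.1500, "search_value": 0.49},
}

# The prior biases which moves get explored, but the final choice follows
# the evaluated win rate, so a "human-improbable" move can still win out.
best = max(moves, key=lambda m: moves[m]["search_value"])
print(best)  # shoulder_hit_37
```

The point of the sketch: a low prior only means humans rarely play the move, not that the move is bad once you look far enough ahead.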

Move 78


In Game Four, Lee Sedol was intent on regaining some pride for himself and the tens of millions who watched the match across the globe. But midway through the game, the Korean’s prospects didn’t look good. “Lee Sedol needs to do something special,” said one commentator. “Otherwise, it’s just not going to be enough.” But after considering his next move for a good 30 minutes, he delivered something special. It was Move 78, a “wedge” play in the middle of the board, and it immediately turned the game around.

As we found out after the game, AlphaGo made a disastrous play with its very next move, and just minutes later, after analyzing the board position, the machine determined that its chances of winning had suddenly fallen off a cliff.

Commentator and nine dan Go player Michael Redmond called Lee Sedol’s move brilliant: “It took me by surprise. I’m sure that it would take most opponents by surprise. I think it took AlphaGo by surprise.”

Among Go players, the move was dubbed “God’s Touch.” It was high praise indeed. But then the higher praise came from AlphaGo.

One in Ten Thousand – Again

The next morning, as we walked down Sejong Daero, the main boulevard just down the street from the Four Seasons, I discussed the move with Demis Hassabis, who oversees DeepMind and was very much the face of AlphaGo during the seven-day match. Passers-by treated him like a celebrity—and indeed he was, after appearing in countless newspapers and on so many TV news shows. Here in Korea, where more than 8 million people play the game of Go, Lee Sedol is a national figure.

Hassabis told me that AlphaGo was unprepared for Lee Sedol’s Move 78 because it didn’t think that a human would ever play it. Drawing on its months and months of training, it decided there was a one-in-ten-thousand chance of that happening. In other words: exactly the same tiny chance that a human would have played AlphaGo’s Move 37 in Game Two.

The symmetry of these two moves is more beautiful than anything else. One-in-ten-thousand and one-in-ten-thousand. This is what we should all take away from these astounding seven days. Hassabis and Silver and their fellow researchers have built a machine capable of something super-human. But think about what happens when you put these two things together. Human and machine. Fan Hui will tell you that after five months of playing match after match with AlphaGo, he sees the game completely differently. His world ranking has skyrocketed. And apparently, Lee Sedol feels the same way. Hassabis says that he and the Korean met after Game Four, and that Lee Sedol echoed the words of Fan Hui. Just these few matches with AlphaGo, the Korean told Hassabis, have opened his eyes.

This isn’t human versus machine. It’s human and machine. Move 37 was beyond what any of us could fathom. But then came Move 78. And we have to ask: If Lee Sedol hadn’t played those first three games against AlphaGo, would he have found God’s Touch? The machine that defeated him had also helped him find the way. – Cade Metz


Tuesday 29th March 2016

Quantum Mechanics is So Weird That Scientists Need AI to Choose the Experiments

Researchers at the University of Vienna have created an algorithm that helps plan experiments in this mind-boggling field.

Quantum mechanics is one of the weirdest fields in science. Even physicists find it tough to wrap their heads around it. As Michael Merrifield of the University of Nottingham says, “If it doesn’t confuse you, that really just tells you that you haven’t understood it.”

This makes designing experiments very tricky. However, these experiments are vital if we want to develop quantum computing and cryptography. So a team of researchers decided, since the human mind has such a hard time with quantum science, that maybe a “brain” without human preconceptions would be better at designing the experiments.

Melvin, an algorithm designed by Anton Zeilinger and his team at the University of Vienna, has proven this to be the case. The research has been published in the journal Physical Review Letters.

The concept was dreamed up by doctoral student Mario Krenn, who was trying to design a particular experiment by putting together lasers and mirrors in such a way as to produce a specific quantum state.

This experiment designed by Melvin produces entangled photons. (Mario Krenn/University of Vienna)

Melvin works by taking the building blocks of a quantum experiment (the aforementioned lasers and mirrors) and the quantum state desired as an outcome and running through different setups at random. If the random setup results in the desired outcome, Melvin will simplify it. It can also learn from experience, remembering which configurations result in which outcomes, so it can use those and build on them as needed.
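The loop described above can be sketched in miniature. Everything here is a stand-in (the component names and the toy “simulator” are invented); it only illustrates the shape of random search plus simplification plus remembered outcomes:

```python
import random

# Toy sketch of Melvin's loop (all names and the "simulator" are invented
# stand-ins; the real Melvin simulates optical elements and the quantum
# states they produce).
TOOLBOX = ["beam_splitter", "mirror", "holo_plate", "dove_prism"]

def simulate(setup):
    # Stand-in for a quantum-optics simulator: here the "outcome" is just
    # the set of components used, so order and repeats are noise.
    return tuple(sorted(set(setup)))

def simplify(setup):
    # Drop any component whose removal leaves the outcome unchanged.
    target, i = simulate(setup), 0
    while i < len(setup):
        trial = setup[:i] + setup[i + 1:]
        if simulate(trial) == target:
            setup = trial
        else:
            i += 1
    return setup

def search(target, tries=10000, seed=0):
    rng = random.Random(seed)
    memo = {}  # remember which configurations give which outcomes
    for _ in range(tries):
        setup = [rng.choice(TOOLBOX) for _ in range(rng.randint(1, 6))]
        outcome = memo.setdefault(tuple(setup), simulate(setup))
        if outcome == target:
            return simplify(setup)
    return None

found = search(target=("beam_splitter", "mirror"))
print(found)
```

Random trial, then pruning, then reuse of what worked: that is the whole algorithmic shape the article describes, independent of the physics.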

So far, the team says, it has devised experiments that humans were unlikely to have conceived. Some work in ways that are difficult to understand, and they look very different from human-devised experiments.

“I still find it quite difficult to understand intuitively what exactly is going on,” said Krenn. – Michelle Starr


Tuesday 29th March 2016

Incredible Things Are Happening

Each year is more exciting than the last in the research areas I’m involved in. AI, Robotics, Longevity, Biology… all over the place we’re just seeing new things happening year after year.

The number of breakthrough reports in longevity research in the last few months is incredible. This last year everyone is using CRISPR all of a sudden for gene editing. We see that you’re now able to make mice live much longer than they otherwise would have simply by making them flush out old senescent cells. There’s incredible things happening all around. – Ben Goertzel


Sunday 24th April 2016

AI Hits the Mainstream

Insurance, finance, manufacturing, oil and gas, auto manufacturing, health care: these may not be the industries that first spring to mind when you think of artificial intelligence. But as technology companies like Google and Baidu build labs and pioneer advances in the field, a broader group of industries are beginning to investigate how AI can work for them, too.

Today the industry selling AI software and services remains a small one. Dave Schubmehl, research director at IDC, calculates that sales for all companies selling cognitive software platforms—excluding companies like Google and Facebook, which do research for their own use—added up to $1 billion last year.

He predicts that by 2020 that number will exceed $10 billion. Other than a few large players like IBM and Palantir Technologies, AI remains a market of startups: 2,600 companies, by Bloomberg’s count.

General Electric is using AI to improve service on its highly engineered jet engines. By combining a form of AI called computer vision (originally developed to categorize movies and TV footage when GE owned NBC Universal) with CAD drawings and data from cameras and infrared detectors, GE has improved its detection of cracks and other problems in airplane engine blades.

The system eliminates errors common to traditional human reviews, such as a dip in detections on Fridays and Mondays, but also relies on human experts to confirm its alerts. The program then learns from that feedback, says Colin Parris, GE’s vice president of software research. – Nanette Byrnes


Sunday 24th April 2016

Neural Networks: Why AI Development is Going to Get Even Faster

The pace of development of artificial intelligence is going to get faster. And not for the typical reasons — more money, interest from megacompanies, faster computers, cheap & huge data, and so on. Now it’s about to accelerate because other fields are starting to mesh with it, letting insights from one feed into the other, and vice versa.

Neural networks are drawing sustained attention from researchers across the academic spectrum.  “Pretty much any researcher who has been to the NIPS Conference [a big AI conference] is beginning to evaluate neural networks for their application,” says Reza Zadeh, a consulting professor at Stanford. That’s going to have a number of weird effects.

People like neural networks because they basically let you chop out a bunch of hand-written code in favor of feeding inputs and outputs into neural nets and getting computers to come up with the stuff in-between. In technical terms, they infer functions.
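A minimal sketch of that idea: a tiny network is given only input/output pairs for XOR and infers the function in between. This is toy code under those assumptions, not any production framework:

```python
import numpy as np

# No hand-written rule for XOR anywhere below: we supply inputs and
# desired outputs, and gradient descent infers the mapping in between.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

losses = []
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2)           # predictions
    losses.append(float(np.mean((p - y) ** 2)))
    # Backpropagation: gradient of the mean-squared error.
    dp = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for w, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        w -= 0.5 * g                   # gradient descent step

print(losses[0], "->", losses[-1])     # loss falls as the function is inferred
```

The falling loss is the “stuff in-between” being learned rather than programmed.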

Robotics has just started to get into neural networks. This has already sped up development. This year, Google demonstrated a system that teaches robotic arms to learn how to pick up objects of any size and shape. That work was driven by research conducted last year at Pieter Abbeel’s lab in Berkeley, which saw scientists combine two neural network-based techniques (reinforcement learning and deep learning) with robotics to create machines that could learn faster.

More distant communities have already adapted the technology to their own needs. Brendan Frey runs a company called Deep Genomics, which uses machine learning to analyze the genome. Part of the motivation for that is that humans are “very bad” at interpreting the genome, he says. Modern machine learning approaches give us a way to get computers to analyze this type of mind-bending data for us. “We must turn to truly superhuman artificial intelligence to overcome our limitations,” he says.

One of the reasons why so many academics from so many different disciplines are getting involved is that deep learning, though complex, is surprisingly adaptable. “Everybody who tries something seems to get things to work beyond what they expected,” says Pieter Abbeel. “Usually it’s the other way around.”

Oriol Vinyals, who came up with some of the technology that sits inside Google Inbox’s ‘Smart Reply’ feature, developed a neural network-based algorithm to plot the shortest routes between various points on a map. “In a rather magical moment, we realized it worked,” he says. This generality not only encourages more experimentation but speeds up the development loop as well.

One challenge: though neural networks generalize very well, we still lack a decent theory to describe them, so much of the field proceeds by intuition. This is both cool and extremely bad. “It’s amazing to me that these very vague, intuitive arguments turned out to correspond to what is actually happening,” says Ilya Sutskever, research director at OpenAI, of the move to create ever-deeper neural network architectures. Work needs to be done here. “Theory often follows experiment in machine learning,” says Yoshua Bengio, one of the founders of the field.

My personal intuition is that deep learning is going to make its way into an ever-expanding number of domains. Given sufficiently large datasets, powerful computers, and the interest of subject-area experts, the deep-learning tsunami looks set to wash over an ever-larger number of disciplines. – Jack Clark


Sunday 24th April 2016

A $2 Billion Chip to Accelerate Artificial Intelligence

Two years ago we were talking to 100 companies interested in using deep learning. This year we’re supporting 3,500. In two years there has been 35X growth. – Jen-Hsun Huang, CEO of Nvidia

The field of artificial intelligence has experienced a striking spurt of progress in recent years, with software becoming much better at understanding images, speech, and new tasks such as how to play games. Now the company whose hardware has underpinned much of that progress has created a chip to keep it going.

Nvidia announced a new chip called the Tesla P100 that’s designed to put more power behind a technique called deep learning. This technique has produced recent major advances such as the Google software AlphaGo that defeated the world’s top Go player last month.

Deep learning involves passing data through large collections of crudely simulated neurons. The P100 could help deliver more breakthroughs by making it possible for computer scientists to feed more data to their artificial neural networks or to create larger collections of virtual neurons.

Artificial neural networks have been around for decades, but deep learning only became relevant in the last five years, after researchers figured out that chips originally designed to handle video-game graphics made the technique much more powerful. Graphics processors remain crucial for deep learning, but Nvidia CEO Jen-Hsun Huang says that it is now time to make chips customized for this use case.

At a company event in San Jose, he said, “For the first time we designed a [graphics-processing] architecture dedicated to accelerating AI and to accelerating deep learning.” Nvidia spent more than $2 billion on R&D to produce the new chip, said Huang.

It has a total of 15 billion transistors, roughly three times as many as Nvidia’s previous chips. Huang said an artificial neural network powered by the new chip could learn from incoming data 12 times as fast as was possible using Nvidia’s previous best chip.

Deep-learning researchers from Facebook, Microsoft, and other companies that Nvidia granted early access to the new chip said they expect it to accelerate their progress by allowing them to work with larger collections of neurons.

“I think we’re going to be able to go quite a bit larger than we have been able to in the past, like 30 times bigger,” said Bryan Catanzaro, who works on deep learning at the Chinese search company Baidu. Increasing the size of neural networks has previously enabled major jumps in the smartness of software. For example, last year Microsoft managed to make software that beats humans at recognizing objects in photos by creating a much larger neural network.

Huang of Nvidia said that the new chip is already in production and that he expects cloud-computing companies to start using it this year. IBM, Dell, and HP are expected to sell it inside servers starting next year. – Tom Simonite


Sunday 24th April 2016

What Developments Can We Expect to See in Deep Learning in the Next 5 Years?

It will increasingly change what we think of as magic. – Steve Jurvetson

“Recent advances in deep learning technology will help us solve many of the world’s most complex challenges,” said Steve Jurvetson, Partner at DFJ.

Broad application of deep learning into all aspects of our lives will happen in the next 5 years. The way we access healthcare, shop, farm, or interact with other people will all be shaped by learning machines.

Deep learning will allow us to better and more efficiently use resources and drive down the cost of services. In addition, our experiences with machines will be personalized as websites and devices adapt to our individual preferences. – Naveen Rao


Monday 23rd May 2016

Nvidia Smashes Q1 Expectations On ‘Sweeping’ AI Adoption

Nvidia (NVDA) rocketed after the maker of graphics chips beat Q1 sales expectations and topped earnings views, led by faster adoption of artificial intelligence technology that utilizes Nvidia graphics chips.

CEO Jen-Hsun Huang credited accelerated growth of deep-learning, or AI, technology for the Q1 beat.

“Accelerating our growth is deep learning, a new computing model that uses the GPU’s (graphics processing unit) massive computing power to learn artificial intelligence algorithms,” he said in the company’s earnings release. “Its adoption is sweeping one industry after another, driving demand for our GPUs.”

Nvidia’s soon-to-be-released Pascal chip will continue that drive, he said.

“Our new Pascal GPU (graphics processing unit) architecture will give a giant boost to deep learning, gaming and VR (virtual reality),” he said. “Pascal processors are in full production and will be available later this month.” – Allison Gatlin


Monday 23rd May 2016

European Commission’s Billion Euro Bet on Quantum Computing

A new €1 billion ($1.13 billion) project has been announced by the European Commission aimed at developing quantum technologies over the next 10 years and placing Europe at the forefront of “the second quantum revolution.”

Quantum computers have been hailed for their revolutionary potential in everything from space exploration to cancer treatment.

The Quantum Flagship announced will be similar in size, time scale and ambition to the EC’s other ongoing Flagship projects: the Graphene Flagship and the Human Brain Project. As well as quantum computers, the initiative will aim to address other aspects of quantum technologies, including quantum secure communication, quantum sensing and quantum simulation.

Since they were first theorized by the physicist Richard Feynman in 1982, quantum computers have promised to bring about a new era of ultra-powerful computing. One of the field’s pioneers, physicist David Deutsch, famously claimed that quantum computers hold the potential to solve problems that would take a classical computer longer than the age of the universe.

One of the main hopes of the initiative is that quantum technologies will make the leap from research labs to commercial and industrial applications. Matthias Troyer, a computational physics professor at the Institute for Theoretical Physics at ETH Zurich—one of the institutes set to benefit from the fund—believes the initiative acknowledges the fact that this step is now ready to be made.

“Quantum technologies have matured to the point where we are ready to transition from academic projects to the development of competitive commercial products that within the next decade will be able to perform tasks that classical devices are incapable of,” Troyer tells Newsweek.

This is a sentiment shared by Ilyas Khan, co-founder and CEO at Cambridge Quantum Computing (CQC). Khan claims that the Quantum Flagship puts Europe at the front of the race to build the world’s first quantum machines.

CQC has been one of the pioneers in early quantum computer research and in 2015 developed the first operating system capable of accurately simulating a quantum processor. The t|ket> operating system allows research teams to determine the type of operations a quantum computer can perform.

“It has become increasingly clear that it is now only a matter of a relatively short time before quantum technologies become of practical importance at the strategic level for governments and large corporations,” Khan says. – Anthony Cuthbertson


Monday 23rd May 2016


  • The AI revolution is the most profound transformation human civilization will experience in all of history. – Ray Kurzweil

Rise of the Robots is Sparking an Investment Boom

  • Global influx of machines set to open one of the hottest new markets in tech

An army of robots is on the move.

In warehouses, hospitals and retail stores, and on city streets, industrial parks and the footpaths of college campuses, the first representatives of this new invading force are starting to become apparent.

“The robots are among us,” says Steve Jurvetson, a Silicon Valley investor and a director at Elon Musk’s Tesla and SpaceX companies, which have relied heavily on robotics.

A multitude of machines will follow, he says: “A lot of people are going to come in contact with robots in the next two to five years.”

The machines are starting to roll or walk out of the labs. In the process, they are about to tip off a financing boom as robotics — and artificial intelligence — becomes one of the hottest new markets in tech.

A boom is taking place in Asia, with Japan and China, which is in the early stages of retooling its manufacturing sector, accounting for 69 per cent of all robot spending.

“There is an exponential pace of improvement in hardware and machine learning algorithms,” says Uriah Baalke, co-founder of Dispatch, a Silicon Valley company that is testing an autonomous delivery vehicle. The result is a new class of machines that can operate by themselves in human space.

The technology advances behind this wave of innovation have come together remarkably quickly. Funding over the past five years by Darpa, the research arm of the US defence department, has brought breakthroughs in mechanical areas such as robotic limbs, says SRI International’s Mr Kothari.

But the biggest advances have come in software. Improvements in computer vision, for instance, have made possible many companies like Dispatch, whose machines rely on being able to “see” the world around them, says Chris Dixon, a partner at venture capital firm Andreessen Horowitz.

Machine learning algorithms, which are designed to adapt through an endless process of trial and error, play the biggest part in teaching robots how to navigate a world beyond the normal rules-based systems that computers are designed to handle.
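That trial-and-error loop can be sketched as an epsilon-greedy bandit, one of the simplest reinforcement-learning setups. The success rates below are invented for illustration:

```python
import random

# The agent is never told which action is best; it estimates action
# values purely from its own successes and failures.
rng = random.Random(42)
true_success = [0.2, 0.5, 0.8]          # hidden from the learner
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]

for step in range(5000):
    if rng.random() < 0.1:              # explore: try something at random
        a = rng.randrange(3)
    else:                               # exploit: use the best estimate so far
        a = estimates.index(max(estimates))
    reward = 1.0 if rng.random() < true_success[a] else 0.0
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]  # running average

best = estimates.index(max(estimates))
print(best, [round(e, 2) for e in estimates])
```

After a few thousand trials the agent's estimates converge toward the hidden success rates, so it "figures it out" without ever being programmatically told what to do.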

“You won’t have to programmatically tell it what to do; it will figure it out,” says Vinod Khosla, a venture capitalist who has backed robot companies in markets including agriculture and healthcare. “Today, it’s really dumb intelligence — but that will change quickly.” – Richard Waters


Monday 23rd May 2016

Artificial Intelligence Better Than Humans at Cancer Detection

  • Machines are now better than humans at detecting cancer both in pictures and in free text documents. What’s next? – Sprezzaturian

Researchers from the Regenstrief Institute and Indiana University School of Informatics and Computing say they’ve found that open-source machine learning tools are as good as — or better than — humans in extracting crucial meaning from free-text (unstructured) pathology reports and detecting cancer cases.

The computer tools are also faster and less resource-intensive.

“We think that it’s no longer necessary for humans to spend time reviewing text reports to determine if cancer is present or not,” said study senior author Shaun Grannis, M.D., M.S., interim director of the Regenstrief Center of Biomedical Informatics.

“We have come to the point in time that technology can handle this. A human’s time is better spent helping other humans by providing them with better clinical care. Everything — physician practices, health care systems, health information exchanges, insurers, as well as public health departments — is awash in oceans of data. How can we hope to make sense of this deluge of data? Humans can’t do it — but computers can.”

“This is a major infrastructure advance — we have the technology, we have the data, we have the software from which we saw accurate, rapid review of vast amounts of data without human oversight or supervision.” – Kurzweil AI
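A hedged miniature of the approach described above: a naive Bayes classifier flagging free-text reports as cancer-positive or negative. The reports and vocabulary here are invented, and the study's actual tools and features are not specified; this only shows the general technique of learning from annotated text:

```python
import math
from collections import Counter

# Tiny invented training set of annotated free-text reports.
train = [
    ("invasive carcinoma identified in biopsy specimen", "pos"),
    ("malignant cells present consistent with adenocarcinoma", "pos"),
    ("benign tissue no evidence of malignancy", "neg"),
    ("normal mucosa negative for carcinoma", "neg"),
]

word_counts = {"pos": Counter(), "neg": Counter()}
label_counts = Counter()
for text, label in train:
    word_counts[label].update(text.split())
    label_counts[label] += 1
vocab = {w for c in word_counts.values() for w in c}

def classify(text):
    scores = {}
    for label in ("pos", "neg"):
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / len(train))
        for w in text.split():
            # Laplace smoothing so unseen words don't zero the probability.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("biopsy shows invasive malignant carcinoma"))  # pos
```

Real systems use far richer features and much more data, but the core is the same: statistical patterns learned from labeled reports replace manual chart review.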


Monday 23rd May 2016

Nvidia Bringing Artificial Intelligence to America’s Best Hospital

In order to advance healthcare by applying the latest artificial intelligence techniques to improve the detection, diagnosis, treatment and management of diseases, NVIDIA announced that it is a founding technology partner of the Massachusetts General Hospital (MGH) Clinical Data Science Center.

MGH — which conducts the largest hospital-based research program in the United States, and is the top-ranked hospital on this year’s US News and World Report “Best Hospitals” list — recently established the MGH Clinical Data Science Center in Boston.

The center will train a deep neural network using Mass General’s vast stores of phenotypic, genetics and imaging data. The hospital has a database containing some 10 billion medical images.

To process this massive amount of data, the center will deploy the NVIDIA DGX-1 — a server designed for AI applications, launched recently at the GPU Technology Conference — and deep learning algorithms created by NVIDIA engineers and Mass General data scientists.

“Deep learning is revolutionizing a wide range of scientific fields,” said Jen-Hsun Huang, CEO and co-founder, NVIDIA. “There could be no more important application of this new capability than improving patient care. This work will one day benefit millions of people by extending the capabilities of physicians with an incredibly powerful new tool.”

Using AI, physicians can compare a patient’s symptoms, tests and history with insight from a vast population of other patients. Initially, the MGH Clinical Data Science Center will focus on the fields of radiology and pathology — which are particularly rich in images and data — and then expand into genomics and electronic health records. – Nvidia


Saturday 18th June 2016

Bill Gates on AI: “The Dream Is Finally Arriving. This Is What It Was All Leading Up To.”

After years of working on the building blocks of speech recognition and computer vision, Gates said enough progress has been made to ensure that in the next 10 years there will be robots to do tasks like driving and warehouse work as well as machines that can outpace humans in certain areas of knowledge.

“The dream is finally arriving,” Gates said, speaking with wife Melinda Gates. “This is what it was all leading up to.”

He suggested a pair of books that people should read: Nick Bostrom’s Superintelligence and Pedro Domingos’ The Master Algorithm.

Melinda Gates noted that you can tell a lot about where her husband’s interest is by the books he has been reading. “There have been a lot of AI books,” she said. – Ina Fried


Saturday 18th June 2016

Artificial Intelligence ‘Outsmarts Cancer’

Early clinical-trial data shows that a drug developed using artificial intelligence can slow the growth of cancer.

The data, presented at the American Society of Clinical Oncology conference, showed some tumours shrank by around a quarter. The compound will now be taken into more advanced trials.

Scientists said we were now in an explosive stage of merging advances in computing with medicine.

Spotting every difference between a cancerous and a healthy cell is beyond even the brightest human minds. So the US biotechnology company Berg has been feeding as much data as its scientists could measure on the biochemistry of cells into a supercomputer.

The aim was to let an artificial intelligence suggest a way of switching a cancerous cell back to a healthy one. It led to their first drug, named BPM31510, which tries to reverse the Warburg effect – the phenomenon in which cancerous cells change their energy supply.

Innovative treatments

The results from patients are being fed back into the artificial intelligence in order to further target the therapy at those most likely to respond.

The company thinks cancers with high energy demands will benefit the most and is planning a more advanced trial in patients with pancreatic cancer.

Dr Alan Worsley, from Cancer Research UK, said we were only at the beginning of harnessing the huge advances in computing to understand cancer. – James Gallagher


Saturday 18th June 2016

Tech Moguls Declare Era of Artificial Intelligence

Amazon CEO Jeff Bezos predicted a profound impact on society over the next 20 years.

“It’s really early but I think we’re on the edge of a golden era. It’s going to be so exciting to see what happens,” he said.

Amazon has been working on artificial intelligence for at least four years and now has 1,000 employees working on Alexa, the company’s voice-based smart assistant software system, he said.

Big tech companies including Amazon have an edge at present because they have access to large amounts of data but hundreds of AI startups will hatch in the next few years, he said.

IBM CEO Ginni Rometty said the company has been working on artificial intelligence, which she calls cognitive systems, since 2005, when it started developing its Watson supercomputer.

“I would say in five years, there’s no doubt in my mind that cognitive AI will impact every decision made” from healthcare to education to financial services, Rometty said. – Liana B. Baker


Saturday 18th June 2016

Worst-Case Scenarios for Evil AI

  • Enslave humans
  • Seize control of resources
  • Damage planet

So, no worse than government.

Elaine Ou


Saturday 18th June 2016

The Singularity is Near

When Ray Kurzweil published The Singularity Is Near in 2005, many scoffed at his outlandish predictions.

Two years before Apple launched its iPhone, Kurzweil imagined a world in which humans and computers essentially fuse, unlocking capabilities we normally see in science fiction movies.

He pointed out that as technology accelerates at an exponential rate, progress would eventually become virtually instantaneous—a singularity. Further, he predicted that as computers advanced, they would merge with other technologies, namely genomics, nanotechnology and robotics.

Today, Kurzweil’s ideas don’t seem quite so outlandish. Google’s DeepMind recently beat legendary Go world champion Lee Sedol. IBM’s Watson is expanding horizons in medicine, financial planning and even cooking. Self-driving cars are expected to be on the road by 2020.

Just as Kurzweil predicted, technology seems to be accelerating faster than ever before. – Greg Satell


Saturday 18th June 2016

The Sixth Paradigm: We’re Going Beyond Moore’s Law 

Kurzweil has pointed out that microprocessors are in fact the fifth paradigm of information processing, replacing earlier technologies such as electromechanical relays, vacuum tubes and transistors.

He also argues that the number of transistors on a chip is a fairly arbitrary way to measure performance, and suggests looking at the number of calculations per $1,000 instead.

And it turns out that he’s right. While the process of cramming more transistors on silicon wafers is indeed slowing down, we’re finding a variety of ways to speed up overall performance, such as quantum computing, neuromorphic chips and 3D stacking.

We can expect progress to continue accelerating, at least for the next few decades. – Greg Satell


Monday 8th August 2016

Google Turns on DeepMind AI, Cuts Cooling Energy Bill by 40%

  • This alone probably pays for the Deepmind acquisition. Shows how far below Pareto optimal limits even Google was. – Balaji S. Srinivasan
  • The electricity consumption of data centers is on a pace to account for 12 percent of global electricity consumption by 2017. – Dan Heilman

Google Uses AI To Cool Data Centers and Slash Electricity Use

  • DeepMind controls about 120 variables in the data centers. The fans and the cooling systems and so on, and windows and other things. They were pretty astounded. – Demis Hassabis, DeepMind Co-Founder
  • I don’t think most grasp the significance of this. Oil companies have similar systems they pay billions trying to optimize. – iandanforth
  • It’s all about optimization. It can be used in supply logistics, shipping logistics and dynamic pricing in addition to keeping an industrial area at the right temperature. We’ll be seeing AI being applied to a lot more areas. – Dave Schubmehl
  • Honestly, I’m skeptical a generalized AI will go fully conscious in my lifetime. But these specialized AI? These things are going to start changing our lives over the next ten years in unimaginable ways. The energy savings alone is incredible. – tendimensions

The amount of energy consumed by big data centers has always been a headache for tech companies.

Keeping the servers cool as they crunch numbers is such a challenge that Facebook even built one of its facilities on the edge of the Arctic Circle.

Well, Google has a different solution to this problem: putting its DeepMind artificial intelligence unit in charge and using AI to manage power usage in parts of its data centers.

The results of this experiment? A 40 percent reduction in the amount of electricity needed for cooling, which Google describes as a “phenomenal step forward.”

[Chart: a typical day of testing, including when machine-learning recommendations were turned on and when they were turned off.]

The AI worked out the most efficient methods of cooling by analyzing data from sensors among the server racks, including information on things like temperatures and pump speeds.

DeepMind’s engineers say the next step is to identify where new data is needed to calculate further efficiencies, and to deploy sensors in those areas.

And the company won’t stop with Google’s data centers. “Because the algorithm is a general-purpose framework to understand complex dynamics, we plan to apply this to other challenges in the data centre environment and beyond in the coming months,” said DeepMind in a blog post.

“Possible applications of this technology include improving power plant conversion efficiency […], reducing semiconductor manufacturing energy and water usage, or helping manufacturing facilities increase throughput.” – James Vincent


Monday 8th August 2016

Simple OpenCog Inference via Hanson Robot

We’re undertaking a serious effort to build a thinking machine.

No challenge today is more important than creating Artificial General Intelligence (AGI), with broad capabilities at the human level and ultimately beyond.

OpenCog is an open-source software project aimed at directly confronting the AGI challenge, using mathematical and biological inspiration and professional software engineering techniques.

OpenCog is an ambitious project with many challenges. We are however confident that our design and software is capable of human preschool-level intelligence after years — not decades — of sustained effort along our roadmap. After that, progress will become increasingly rapid.

This demo showcases aspects of some recently completed systems integration — the Hanson Robotics HEAD software framework controlling robot Han Hanson, and OpenCog’s Probabilistic Logic Networks inference engine connected to OpenCog’s English language comprehension and generation pipelines. – Ben Goertzel


Thursday 18th August 2016


$13 trillion of negative-yielding debt and Amazon is selling a tablet for $33.

I was promised a hyperinflationary apocalypse.

Morgan Housel


Thursday 18th August 2016

The AI Gold Rush

Companies are lining up to supply shovels to participants in the AI gold rush.

The name that comes up most frequently is NVIDIA (NASDAQ: NVDA), says Chris Dixon of Andreessen Horowitz; every AI startup seems to be using its GPU chips to train neural networks.

GPU capacity can also be rented in the cloud from Amazon (NASDAQ: AMZN) and Microsoft (NASDAQ: MSFT).

IBM (NYSE: IBM) and Google, meanwhile, are devising new chips specifically built to run AI software more quickly and efficiently.

And Google, Microsoft and IBM are making AI services such as speech recognition, sentence parsing and image analysis freely available online, allowing startups to combine such building blocks to form new AI products and services.

More than 300 companies from a range of industries have already built AI-powered apps using IBM’s Watson platform, says Guru Banavar of IBM, doing everything from filtering job candidates to picking wines. – The Economist


Thursday 18th August 2016

Most Active Investors in Artificial Intelligence

1 – Intel (NASDAQ:INTC)

2 – Google (NASDAQ: GOOGL)

3 – GE (NYSE: GE)

4 – Samsung (005930.KS)

Artificial intelligence dealmaking has exploded recently, leaping to a new quarterly record of over 140 deals in Q1’16.

1 – Intel Capital is the most active corporate investor on our list, having backed over a dozen unique AI-based companies, including healthcare startup Lumiata, machine-learning platform DataRobot, and imaging startup Perfant Technology.

2 – Google Ventures, which backed over 10 unique companies, ranked second as an active investor in AI. Google is also a major acquirer of AI startups.

CB Insights


Thursday 18th August 2016

Deep Learning

Silicon Valley’s financiers and entrepreneurs are digging into artificial intelligence with remarkable exuberance.

The new A.I. era has spurred an intense rush for talent.

“The number of people trying to get the students to drop out of the class halfway through because now they know a little bit of this stuff is crazy,” said Richard Socher, who teaches a course on a machine intelligence technique known as deep learning. – John Markoff


Thursday 18th August 2016

Artificial Intelligence: BabyX

  • “I was not expecting this for another 10 years” – Nathanael Ries

BabyX is a project for the creation of a virtual animated baby that learns and reacts like a human baby.

It uses the computer’s cameras for “seeing” and microphones to “listen” as the inputs.

The computer uses Artificial intelligence algorithms for BabyX’s “learning” and interpretation of the inputs (voice and image) to understand the situation.

The result is a virtual toddler that can learn to read, recognize objects and “understand.”

The output is the baby’s face that can “speak” and express its mood by facial expressions (such as smile and show embarrassment). – Wikipedia


Thursday 18th August 2016

The Singularity is Coming

Masayoshi Son (the 2nd richest person in Japan) cashed in on his investments this year, resulting in the sale of $18.6bn worth of shares in Alibaba and Supercell. In addition to those proceeds, SoftBank has $25bn in cash.

The asset sales have intensified the guessing game around the likely target of any new gamble by the mercurial chief executive.

“The next big investment could be in artificial intelligence and robotics,” Shigeyuki Kishida, a consultant at InfoCom Research, says. “AI is expected to penetrate various industries and Mr Son wants to create an underlying platform to support those industries.”

“I still have unfinished business regarding the Singularity,” Son told the Nikkei.

“There will come a time when the human race and super intelligence will coexist to create a richer and happier life.”

“That is what I want to devote my life to. I believe the information revolution in the true sense has just begun. My work is not done yet.” – Leo Lewis, Kana Inagaki and Simon Mundy


Friday 30th September 2016

AI Saves Life By Identifying Disease When Humans Failed

  • Japanese doctors have, for the first time in history, used artificial intelligence to detect a type of leukemia.

If you needed proof that the age of artificial intelligence is officially upon us, well, look no further.

Reports assert that IBM’s artificial intelligence (AI) system, Watson, just saved the life of a Japanese woman by correctly identifying her disease. This is notable because, for some time, her illness went undetected using conventional methods, and doctors were stumped.

The AI’s positive identification allowed doctors to develop a treatment for the woman in question, ultimately saving her life.

The key to this success is the AI’s ability to take a massive amount of data and analyze it quickly. This is something that human physicians, sadly, cannot do themselves (or at least, they can’t do it with nearly the same accuracy or efficiency).

The system looked at the woman’s genetic information and compared it to 20 million clinical oncology studies. After doing so, it determined that the patient had an exceedingly rare form of leukemia.

Initially, the woman had been diagnosed with, and treated for, acute myeloid leukemia; however, she failed to respond to the traditional treatment methods, which perplexed doctors.

Notably, the AI was able to diagnose the condition in just 10 minutes. – Jolene Creighton


Wednesday 19th October 2016

Food Prices Plummet As Production Gets Much More Sophisticated

In a startling development, almost unheard of outside a recession, food prices have fallen for nine straight months in the U.S.

It’s the longest streak of food deflation since 1960 — with the exception of 2009, when the financial crisis was winding down.

“The severity of what we’re seeing is completely unprecedented,” said Scott Mushkin, an analyst at Wolfe Research who has studied grocery prices around the country for more than ten years.

“We’ve never seen deflation this sharp.” – Craig Giammona


Wednesday 19th October 2016

Venture Capitalist Marc Andreessen: AI Will Change the World

I was really skeptical at first. It’s not widely known, but there was an AI bubble in the 1980s where there were a whole bunch of venture-backed companies that got funded and they basically all blew up and torched all the capital.

We feel like we’re seeing something different now. The really big change was the ImageNet competition in 2012. In 2012, computers became better than people at recognizing objects in images. This is an actual competition where they’ve calibrated how to measure this. Part of the breakthrough on ImageNet was the sheer size of databases you could train the algorithm against.

Basically what we’ve seen in the last four years is breakthrough after breakthrough after breakthrough.

First was the breakthrough in recognizing objects in still images. There are corresponding breakthroughs happening right now in recognizing objects in videos — entirely new kinds of video classification. If you can do video recognition you can do realtime video, which means you can do autonomy. – Marc Andreessen


Wednesday 19th October 2016

Google’s DeepMind Achieves Speech-Generation Breakthrough

Google’s DeepMind unit, which is working to develop super-intelligent computers, has created a system for machine-generated speech that it says outperforms existing technology by 50 percent.

U.K.-based DeepMind, which Google acquired for about 400 million pounds ($533 million) in 2014, developed an artificial intelligence called WaveNet that can mimic human speech by learning how to form the individual sound waves a human voice creates, it said in a blog post Friday.

In blind tests for U.S. English and Mandarin Chinese, human listeners found WaveNet-generated speech sounded more natural than that created with any of Google’s existing text-to-speech programs, which are based on different technologies. WaveNet still underperformed recordings of actual human speech.

Tech companies are likely to pay close attention to DeepMind’s breakthrough. Speech is becoming an increasingly important way humans interact with everything from mobile phones to cars. Amazon.com Inc., Apple Inc., Microsoft Corp. and Alphabet Inc.’s Google have all invested in personal digital assistants that primarily interact with users through speech.

Mark Bennett, the international director of Google Play, which sells Android apps, told an Android developer conference in London last week that 20 percent of mobile searches using Google are made by voice, not written text.

And while researchers have made great strides in getting computers to understand spoken language, their ability to talk back in ways that seem fully human has lagged. – Jeremy Kahn


Wednesday 30th November 2016

Moore’s Law is Now in NVIDIA’s Hands

  • We’ll soon see the power of computing increase way more than any other period. There will be trillions of products with tiny neural networks inside. – Jen-Hsun Huang, CEO of NVIDIA

Intel long ago ceded leadership of Moore’s Law. And so, understandably, they have trumpeted the end of Moore’s Law for many years. To me, it sounds a lot like Larry Ellison’s OpEd declaring the end of innovation in enterprise software, just before cloud computing and SaaS took off. In both cases, the giants missed the organic innovation bubbling up all around them.

For the past seven years, it has not been Intel but NVIDIA that has pushed the frontier of Moore’s processor performance/price curve.

For a 2016 data point, consider the NVIDIA GTX Titan X. It offers 10^13 FLOPS per $1K (11 trillion calculations per second for $1,200 list price), and is the workhorse for deep learning and scientific supercomputing today. And they are sampling much more powerful systems that should be shipping soon. The fine-grained parallel compute architecture of a GPU maps better to the needs of deep learning than a CPU.
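
As a quick back-of-the-envelope check, the performance-per-dollar figure follows directly from the quoted specs. This is an illustrative sketch; the 11 TFLOPS and $1,200 numbers are the ones cited in this entry:

```python
# Sanity-check the "10^13 FLOPS per $1K" claim from the quoted
# GTX Titan X specs: ~11 TFLOPS at a $1,200 list price.
flops = 11e12        # 11 trillion calculations per second
price_usd = 1200     # approximate list price in dollars

flops_per_1k_usd = flops / (price_usd / 1000)
print(f"{flops_per_1k_usd:.2e} FLOPS per $1,000")  # prints 9.17e+12, on the order of 10^13
```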

There is a poetic beauty to the computational similarity of a processor optimized for graphics processing and the computational needs of a sensory cortex, as commonly seen in neural networks today.

I was going to update the Kurzweil Curve (the meaningful version of Moore’s Law) to include the latest data points, and found that he was doing the same thing.

Here is the preliminary version. The 7 most recent data points are all NVIDIA, with CPU architectures dominating the prior 30 years:


It’s what gives us hope for the future and I think it’s the most important thing ever graphed.

Here is the prior version


Moore’s Law is now in NVIDIA’s hands. Consider the GTX Titan X for 2016. 11 TFLOPS for $1,200. That would be on the order of 10^13 for the far right side of the graph, perfectly on the line.

Steve Jurvetson


Wednesday 30th November 2016

Building an AI Portfolio

The following stocks offer exposure to Artificial Intelligence. – Lee Banfield



Google (NASDAQ: GOOGL)

Stock Price: $776

Market Cap: $531 billion

Healthcare Images – Google DeepMind

Machine Learning – GoogleML

Autonomous Systems – Google Self-driving Car

Hardware – GoogleTPU

Open Source Library – TensorFlow


IBM (NYSE: IBM)

Stock Price: $162

Market Cap: $154 billion

Enterprise Intelligence – IBM Watson

Healthcare – IBM Watson Health


Amazon (NASDAQ: AMZN)

Stock Price: $752

Market Cap: $355 billion

Personal Assistant – Amazon Alexa

Open Source Library – DSSTNE

Microsoft (NASDAQ: MSFT)

Stock Price: $60

Market Cap: $473 billion

Personal Assistant – Cortana

Open Source Libraries – CNTK, AzureML, DMTK


NVIDIA (NASDAQ: NVDA)

Stock Price: $94

Market Cap: $50 billion

Hardware – GPUs for Deep Learning

Samsung (SSNLF:US)

Stock Price: $1,250

Market Cap: $176 billion

Personal Assistant – Viv

Qualcomm (NASDAQ: QCOM)

Stock Price: $68

Market Cap: $100 billion

Tesla (NASDAQ: TSLA)

Stock Price: $188

Market Cap: $29 billion

Autonomous Vehicles

Illumina (NASDAQ: ILMN)

Stock Price: $133

Market Cap: $20 billion

Healthcare, Cancer Detection – Grail

Mobileye (NYSE: MBLY)

Stock Price: $37

Market Cap: $8 billion

Autonomous Vehicles


Wednesday 14th December 2016


What’s the next job to be automated by AI?

Designing deep neural nets, apparently. And it only took 800 GPUs: Neural Architecture Search with Reinforcement Learning

Andrew Beam


Wednesday 14th December 2016

OpenAI Releases Universe, an Open Source Platform for Training AI

  • If Universe works, computers will use video games to learn how to function in the physical world. – Wired
  • Universe is the most intricate and intriguing tech I’ve worked on. – Greg Brockman

OpenAI, the billion-dollar San Francisco artificial intelligence lab backed by Tesla CEO Elon Musk, just unveiled a new virtual world.

This isn’t a digital playground for humans. It’s a school for artificial intelligence. It’s a place where AI can learn to do just about anything.

Other AI labs have built similar worlds where AI agents can learn on their own. Researchers at the University of Alberta offer the Atari Learning Environment, where agents can learn to play old Atari games like Breakout and Space Invaders.

Microsoft offers Malmo, based on the game Minecraft. And earlier this month, Google’s DeepMind released an environment called DeepMind Lab.

But Universe is bigger than any of these. It’s an AI training ground that spans any software running on any machine, from games to web browsers to protein folders.

“The domain we chose is everything that a human can do with a computer,” says Greg Brockman, OpenAI’s chief technology officer.

In theory, AI researchers can plug any application into Universe, which then provides a common way for AI “agents” to interact with these applications. That means researchers can build bots that learn to navigate one application and then another and then another. – Cade Metz


Wednesday 14th December 2016

Energy Will be Essentially Free in a Few Decades

Consequently all other material needs will be satisfied without cost: food, water, shelter, transportation, communication. Every individual will thus finally be able to live independently in material abundance.

Power is all around us

And we have already begun to capture it

A couple of hours’ worth of sunlight energy falling on the earth is worth about a year of global energy consumption. In short, it’s enough, and it’s basically everywhere.
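
That claim is easy to sanity-check with round numbers. The two constants below are commonly cited approximations, not figures from the text: Earth intercepts roughly 174,000 TW of solar power, and global annual energy consumption is on the order of 5.8 × 10^20 joules.

```python
# How long must sunlight fall on Earth to match a year of global energy use?
# Assumed round numbers (not from the text above):
solar_power_watts = 1.74e17      # ~174,000 TW intercepted by Earth
annual_energy_joules = 5.8e20    # approximate global consumption per year (~2016)

seconds = annual_energy_joules / solar_power_watts
print(f"{seconds / 3600:.1f} hours of sunlight ≈ one year of consumption")
```

With these figures it works out to roughly an hour — the same order of magnitude as the “couple of hours” claimed above.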

If we build enough solar cells, some of the energy collected could be used for the construction of an automatic cleaning and maintenance system for the energy infrastructure.

That would give us free electricity.


Yes, I’m aware of the challenges in terms of making new kinds of efficient and sustainable, “green” solar cells, as well as making enough of them and designing a fully automated maintenance system. Just give it time. 

There is at least 1000x the amount of solar energy falling on the Earth than we need. That’s a fact.

CAN we collect it? Well, plants can – and we have recently developed bionic leaves that are even more efficient. Do you think we will never improve from the current level, despite having just started? The energy is there. There is nothing in the laws of physics that says we can’t make use of a fraction of the sun’s energy for our own purposes.

There will be robots

-A myriad of robots of all sizes, taking care of us and each other

Then use the surplus energy created to power some robots to add more solar cells and build more robots. Voilà, we essentially have created a free labor force of solar powered robots.

Sure, we’ll need better robots, autonomous vehicles, robots being able to build robots, better solar cells etc., but we are getting there.

No matter what the elite wants, it won’t take long until everybody has his own solar cells and robots, or access to a pool of such resources. When? I don’t know, but very probably within half a century.

Once energy and labor are free, so will:

Water (large scale desalination is only a matter of energy input),

Food (robot-tended vertical farms with artificial light [reverse solar cells]),

Shelter (solar powered robots can collect any material and build/3D-print any type of structure according to open source specifications on the internet (10 houses in 24 hours)),

Transportation (vehicles are robots and thus free as shown above; cars, planes, ships and roads will be powered, built, driven and maintained automatically in much the same way as everything else)

Communication (the easiest task of all in the scenario of free energy).

The only urgent challenge remaining will be death, and its cousin, disease.

The good thing is, with everything else free; every intelligent man, woman and child will be free to think and collaborate in order to develop the technologies necessary to prevent aging and illness. Scientists are already chipping away at the longevity problem piece by piece. – Mikael Syding


Saturday 24th December 2016

Huge Improvements as Google Translate Converts to AI Based System

  • The A.I. system made overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime.
  • The rollout includes translations between English and Spanish, French, Portuguese, German, Chinese, Japanese, Korean and Turkish. The rest of Translate’s hundred-odd languages are to come, with the aim of eight per month.
  • The Google Translate team had been steadily adding new languages and features to the old system, but gains in quality over the last four years had slowed considerably.

Late one Friday night in early November, Jun Rekimoto, a distinguished professor of human-computer interaction at the University of Tokyo, was online preparing for a lecture when he began to notice some peculiar posts rolling in on social media.

Apparently Google Translate, the company’s popular machine-translation service, had suddenly and almost immeasurably improved. Rekimoto visited Translate himself and began to experiment with it. He was astonished. He had to go to sleep, but Translate refused to relax its grip on his imagination.

Rekimoto promoted his discovery to his hundred thousand or so followers on Twitter, and over the next few hours thousands of people broadcast their own experiments with the machine-translation service.

As dawn broke over Tokyo, Google Translate was the No. 1 trend on Japanese Twitter, just above some cult anime series and the long-awaited new single from a girl-idol supergroup. Everybody wondered: How had Google Translate become so uncannily artful?

[Image: Google Translate’s side-by-side experiment to compare the new system with the old one.]

Schuster wanted to run a side-by-side test for English-French, but Hughes advised him to try something else. “English-French,” he said, “is so good that the improvement won’t be obvious.”

It was a challenge Schuster couldn’t resist. The benchmark metric to evaluate machine translation is called a BLEU score, which compares a machine translation with an average of many reliable human translations.

At the time, the best BLEU scores for English-French were in the high 20s. An improvement of one point was considered very good; an improvement of two was considered outstanding.
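
For readers unfamiliar with the metric, the core of BLEU can be sketched in a few lines of Python: clipped n-gram precision against the reference translations, combined as a geometric mean and scaled by a brevity penalty. This is a minimal illustration of the idea (with add-one smoothing assumed for short sentences), not the evaluation code Google used:

```python
import math
from collections import Counter

def bleu(candidate, references, max_n=4):
    """Minimal sentence-level BLEU: geometric mean of clipped n-gram
    precisions (add-one smoothed here) times a brevity penalty for
    candidates shorter than the closest reference. Inputs are token lists."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        # A candidate n-gram only counts up to the most times it
        # appears in any single reference ("clipping").
        best_ref = Counter()
        for ref in references:
            for ng, c in Counter(tuple(ref[i:i + n])
                                 for i in range(len(ref) - n + 1)).items():
                best_ref[ng] = max(best_ref[ng], c)
        clipped = sum(min(c, best_ref[ng]) for ng, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append((clipped + 1) / (total + 1))  # smoothed precision
    # Brevity penalty against the reference closest in length.
    ref_len = min((abs(len(r) - len(candidate)), len(r)) for r in references)[1]
    bp = 1.0 if len(candidate) >= ref_len else math.exp(1 - ref_len / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

hypothesis = "the cat sat on the mat".split()
references = ["the cat is on the mat".split()]
print(round(100 * bleu(hypothesis, references), 1))  # prints 48.9 on the 0-100 scale
```

A one-point gain on this scale was considered very good, which is what makes the seven-point jump described in this entry so remarkable.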

The neural system, on the English-French language pair, showed an improvement over the old system of seven points. Hughes told Schuster’s team they hadn’t had even half as strong an improvement in their own system in the last four years.

To be sure this wasn’t some fluke in the metric, they also turned to their pool of human contractors to do a side-by-side comparison. The user-perception scores, in which sample sentences were graded from zero to six, showed an average improvement of 0.4 — roughly equivalent to the aggregate gains of the old system over its entire lifetime of development.

In mid-March, Hughes sent his team an email. All projects on the old system were to be suspended immediately.

Gideon Lewis-Kraus, The Great AI Awakening


Saturday 24th December 2016


  • Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent. – Vernor Vinge, 1993

Integrating Artificial Intelligence 

We can enhance our brains to multiply our intelligence and creativity.

We’ve been improving our memory capacity and our speed of computation for thousands of years already with the invention of things like writing, writing implements – just language itself which enables more than one person to work on the same problem and to coordinate their understanding of it with each other. That also allows an increase in speed compared with what an unaided human would be able to do.

Currently we use computers and in the future we can use computer implants and so on. In principle it’s easy – it doesn’t violate any law of physics.  – David Deutsch


Saturday 24th December 2016

With A ‘Neural Lace’ Brain Implant, We Can Stay As Smart As AI

  • Can we just inject electronic circuits through a needle into the brain, or other tissue, and then connect it, and then monitor? Yes, we can, and that’s where we are today. – Charles Lieber

This summer at Code Conference 2016, Elon Musk stated publicly that given the current rate of A.I. advancement, humans could ultimately expect to be left behind—cognitively, intellectually—“by a lot.”

His solution to this unappealing fate is a novel brain-computer interface similar to the implantable “neural lace” described by the Scottish novelist Iain M. Banks in Look to Windward, part of his “Culture series” books. Along with serving as a rite of passage, it upgrades the human brain to be more competitive against A.I.’s with human-level or higher intelligence.

Smarter artificial intelligence is certainly being developed, but how far along are we on producing a neural lace?

At the conference, Musk said he didn’t know of any company that was working on one. But last year, a team of researchers led by Charles Lieber, the Mark Hyman Professor of Chemistry at Harvard University, described in Nature Nanotechnology a lace-like electronic mesh that “you could literally inject” into three-dimensional synthetic and biological structures like the brain. That was a major step.

His team’s paper, published on August 29th in Nature Methods, expands on that earlier work to show that mesh-brain implants readily integrate into a mouse brain and enable neuronal recordings for at least eight months.

“In science, I’ve been disappointed at times, and this is a case where we’ve been more than pleasantly surprised,” Lieber says.

What does this development really mean for those of us who hope to acquire a neural lace? – Kiki Sanford

The Neural Lace Maker

At the outset no one – including a lot of reviewers of that first paper – believed we could even inject electronics through a needle and then not destroy the electronics. A lot of it was actually not related to anything biological. It was really about the materials science, and also showing that you could literally inject this into other kinds of structures.

Also, other implanted electronics in the brain always cause some type of immune response and damage, probably due to the combination of putting something really rigid into this soft tissue: Whenever you move around and your brain moves, it moves different than this thing. It can destroy cells; but also, because it’s much bigger, it’s apparently easier for the cells or the biological system to recognize it as something foreign and try to attack it.

But our philosophy, it seems, is going to be really rewarding because it solves the immune-response problem, and then allows us now to do measurements and modulate neural circuits.

It’s turned out to work much, much better than we originally thought, and some of the reasons are outlined in our original paper a year ago, and then much more so in this paper: That this mesh-like structure, which can be injected because it has size, scale, and mechanical properties very similar to the neural network, or neural tissue, turns out to have no immune response, which is unheard of. – Charles Lieber


Saturday 24th December 2016

Syringe-Injectable Electronics

The first thing we did was to create the first three-dimensional transistor: Three dimensional in a sense that the nanoscale device was completely removed from the substrate, and could then be placed inside of a cell.

The idea was to get things away from the substrate and into three-dimensional free space so that they could be integrated throughout tissue. This showed that we could actually put a fundamental building block of the computer industry inside of, and communicate with, the cell for the first time.

The brain grows literally throughout the neural lace. When it’s injected, this two-dimensional mesh ends up being like a cylinder that’s still a mesh, and it gets filled with the tissue.

Through some process (we don’t understand all the details), there’s obviously some regrowth, and some remodeling of the tissue refills this space where the needle initially moved all the tissue out of the way. Then you’re left with something where it’s interpenetrating between this roughly cylindrical structure of the mesh.

You could envision co-injecting this network, the mesh or lace, with stem cells and literally regrowing damaged tissue. Using some stimulation and stuff, you could help to rewire this in the way you want—somewhat science fiction, but also not totally crazy. It’s certainly in the realm of what’s physically possible.

Our interest is to do things for the benefit of humankind, and maybe I sound like an idealist. I think our goal is to do something, and I think it’s possible to, number one, correct deficiencies. And I wouldn’t mind adding a terabyte of memory. – Charles Lieber


Saturday 24th December 2016


  • In a medical first, brain implant allows paralyzed man to feel again

Brain implants that can command artificial limbs to work represent a revolutionary advance.

By creating a direct line of communication from the brain to the prosthetic device, neurally-controlled chips not only restore functionality, but also recreate the sensory experience of the lost limbs. – The Aspen Institute



Saturday 24th December 2016

Keeping up With AI by Putting a Computer in Your Brain

  • Kernel is a human intelligence company developing the world’s first neuroprosthesis to mimic, repair and improve cognition.

Like many in Silicon Valley, technology entrepreneur Bryan Johnson sees a future in which intelligent machines can do things like drive cars on their own and anticipate our needs before we ask.

What’s uncommon is how Johnson wants to respond: find a way to supercharge the human brain so that we can keep up with the machines.

From an unassuming office in Venice Beach, his science-fiction-meets-science start-up, Kernel, is building a tiny chip that can be implanted in the brain to help people suffering from neurological damage caused by strokes, Alzheimer’s or concussions.

Top neuroscientists who are building the chip — they call it a neuroprosthetic — hope that in the longer term, it will be able to boost intelligence, memory and other cognitive tasks.

The medical device is years in the making, Johnson acknowledges, but he can afford the time. He sold his payments company, Braintree, to PayPal for $800 million in 2013.

Kernel is cognitive enhancement of the not-gimmicky variety. The concept is based on the work of Theodore Berger, a pioneering biomedical engineer who directs the Center for Neural Engineering at the University of Southern California, and is the start-up’s chief science officer.

For over two decades, Berger has been working on building a neuroprosthetic to help people with dementia, strokes, concussions, brain injuries and Alzheimer’s disease, which afflicts 1 in 9 adults over 65.

In separate studies funded by the Defense Advanced Research Projects Agency over the last several years, Berger’s chips were shown to improve recall functions in both rats and monkeys.

A year ago, Berger felt he had reached a ceiling in his research. He wanted to begin testing his devices with humans and was thinking about commercial opportunities when he got a cold call from Johnson in October 2015. For Johnson, the meeting was a culmination of a longtime obsession with intelligence and the brain.

Ten months later, the team is starting to sketch out prototypes of the device and is conducting tests with epilepsy patients in hospitals. They hope to start a clinical trial, but first they have to figure out how to make the device portable. (Right now, patients who use it are hooked up to a computer.)

Johnson recognizes that the notion of people walking around with chips implanted in their heads to make them smarter seems far-fetched, to put it mildly. He says the goal is to build a product that is widely affordable. – Elizabeth Dwoskin


Saturday 24th December 2016

Ray Kurzweil’s Prediction

  • In the early 2030s, we are going to send nanorobots into the brain (via capillaries) that will provide full immersion virtual reality from within the nervous system and will connect our neocortex to the cloud. Just like how we can wirelessly expand the power of our smartphones 10,000-fold in the cloud today, we’ll be able to expand our neocortex in the cloud. – Ray Kurzweil

The brain tech to merge humans and AI is already being developed.

In a recent Abundance 360 webinar, I interviewed Bryan Johnson, the founder of a new company called Kernel which he seeded with $100 million.

To quote Bryan, “It’s not about AI vs. humans. Rather, it’s about creating HI, or ‘Human Intelligence’: the merger of humans and AI.”

A few weeks ago, I asked Bryan whether, as Ray predicts, we'd be able to begin having our neocortex in the cloud by the 2030s.

His response: “Oh, I think it will happen before that.”

Exciting times.

Peter Diamandis


Thursday 26th January 2017


Go World Champ Crushed by AI: “Not a Single Human Has Touched the Edge of the Truth of Go.”

A mysterious character named “Master” has swept through China, defeating many of the world’s top players in the ancient strategy game of Go.

Master played with inhuman speed, barely pausing to think. With a wide-eyed cartoon fox as an avatar, Master made moves that seemed foolish but inevitably led to victory this week over the world’s reigning Go champion, Ke Jie of China.

Master later revealed itself as an updated version of AlphaGo, an artificial-intelligence program designed by the DeepMind unit of Alphabet Inc.’s Google.

It was dramatic theater, and the latest sign that artificial intelligence is peerless in solving complex but defined problems. AI scientists predict computers will increasingly be able to search through thickets of alternatives to find patterns and solutions that elude the human mind.

Master’s arrival has shaken China’s human Go players.

“After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”

Master’s record—60 wins, 0 losses over seven days ending Wednesday—led virtuoso Go player Gu Li to wonder what other conventional beliefs might be smashed by computers in the future. – Eva Dou and Olivia Geng

