1 Bitcoin = $1,575
Bitcoin Sets Record High Above $1,600 with $26 Billion Market Cap
Return on Investments Made On:
May 5th 2011: 413x
May 5th 2012: 330x
May 5th 2013: 16x
May 5th 2014: 4x
May 5th 2015: 7x
May 5th 2016: 4x
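The multiples above are just today's price divided by the price on each date. The historical prices below are my own approximations, chosen so the rounded multiples match the list; they are not from the source:

```python
current_price = 1575  # USD, as of this issue

# Approximate BTC/USD price on May 5th of each year (assumption, not from the source)
historical = {2011: 3.81, 2012: 4.77, 2013: 98, 2014: 394, 2015: 225, 2016: 430}

multiples = {year: round(current_price / price) for year, price in historical.items()}
print(multiples)  # {2011: 413, 2012: 330, 2013: 16, 2014: 4, 2015: 7, 2016: 4}
```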
The Greatest Wealth Transfer in Human History
With the rise of Bitcoin and crypto, I believe we are witnessing the greatest wealth transfer ever seen in human history. – Leon Fu
The Mainstream is Still Oblivious
The mainstream bubble is a ways off, mostly just us nerds still playing in these markets. – Chris Burniske
Billionaire Says Bitcoin is “The Best Investment of my Life”
Billionaire investor Mike Novogratz is betting big on digital currencies like Bitcoin and Ether.
Bitcoin was worth under $500 a year ago. Back in 2013, Novogratz predicted Bitcoin’s value would soar. He remembers people laughing at him at the time.
“Ten percent of my net worth is in this space,” the former Goldman Sachs partner said at the Harvard Business School Club of New York.
It’s the “best investment of my life”.
The First Investor in Snapchat Thinks Bitcoin Could Hit $500,000 by 2030
The cryptocurrency isn’t anywhere close to its potential, according to Jeremy Liew, the first investor in Snapchat, and Peter Smith, the CEO and cofounder of Blockchain:
* Bitcoin-based remittances will explode
Remittance transfers to foreign countries have almost doubled over the past 15 years to 0.76% of gross world product, data from the World Bank shows.
“Expats sending money home have found in bitcoin an inexpensive alternative, and we assume that the percentage of bitcoin-based remittances will sharply increase with greater bitcoin awareness,” the two said.
* The average value of bitcoin held per user will hit $25,000
“As institutional investors invest in bitcoin, sophisticated investors trade bitcoin, and bitcoin-based ETFs proliferate, we think the average bitcoin value held will increase to around $25k per bitcoin holder.”
* Bitcoin’s 2030 Market Cap will be $10 trillion
Price and user count will reach $500,000 and 400 million, respectively.
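The three projections are mutually consistent. Assuming roughly 20 million coins in circulation by 2030 (my assumption; the article quotes only the totals), the $10 trillion cap and the $25,000 average holding both fall out of the arithmetic:

```python
price = 500_000           # projected USD per bitcoin in 2030
coin_supply = 20_000_000  # assumed circulating coins (close to the 21M cap)
users = 400_000_000       # projected number of bitcoin holders

market_cap = price * coin_supply        # price times supply, not price times users
avg_held_per_user = market_cap / users  # total value spread across all holders

print(market_cap)         # 10000000000000 -> $10 trillion
print(avg_held_per_user)  # 25000.0 -> $25k per holder
```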
The Open Source Movement
Lesson to the “BitCorns” of this world
#1 never ever bet against the open source movement.
#2 if you ever have doubts go back to #1
Altcoin Market Cap = $19.7 Billion
Total Digital Currency Market Cap = $44.7 Billion
Bitcoin Share of Total Market Cap = 57%
The New Frontier
Cryptocurrency is the new frontier. Hold a claim in the right location, and you have a chance at building a multi-generational estate. – Tuur Demeester
I predict tokens as an asset class will reach over $300 billion within 4 years. – Erik Voorhees
Dotcom Style Bubble in Crypto Emerging?
It took 7 years for the value of all cryptocurrency to hit $10 billion, another year to hit $20 billion and 3 more months to hit $40 billion. – Bruce Fenton
Ethereum Hits an All-Time High of $9 Billion
More than 500x Return Since July 2014
Billionaire Mike Novogratz is saying Bitcoin will go to $2,000.
But he also warned the Harvard Business School Club crowd that there will “likely be a bubble” in digital currencies. The best way to handle it, he argues, is the old Wall Street trick of diversification. Put a little money in a lot of different plays in digital currency.
For example, Novogratz was also an early investor in Ether. Novogratz says he bought Ether when it was trading for about $1. This week it reached an all-time high of $101.
Ether has a “smart contract” function that gives users the ability to transfer information in addition to monetary value.
We’re witnessing the “3rd inning” of this digital asset revolution, Novogratz predicts. He’s not exactly sure how it will play out, but he plans to continue investing in digital currencies. – Heather Long
Litecoin hits a $1 billion market cap and a quarter billion dollars in 24-hour exchange volume on news of Coinbase support. – Erik Voorhees
PRIVACY / SOUSVEILLANCE / SECURITY / INTERNET
Smartphone App Enables People to Depend on Each Other Instead of Police
Cell 411 puts responsibility back into your hands, away from monopolistic organizations.
Cell 411 allows you to create custom cells or groups of your friends, neighbors or family members and alert them whenever you need help; they will receive your exact location with turn-by-turn directions to come and assist you.
Whether you have a flat tire, you find yourself in danger or need medical assistance, you can leverage the power of large groups of trusted people to call for help and receive it.
We even support public groups that allow entire cities, companies or neighborhoods to join and collaborate on solving emergencies in real time.
You have the freedom to ask anyone for help, anywhere you go. With thousands of cells all over the world, communities, families and neighborhood watch groups have been using Cell 411 to keep criminals out, inform each other of emergencies and respond to each other’s needs.
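The alert flow described above — custom cells of trusted contacts, each of whom receives the sender's location — can be sketched minimally. All names here are my own illustration, not Cell 411's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """A group of trusted contacts: friends, family, or a neighborhood watch."""
    name: str
    members: list = field(default_factory=list)

def issue_alert(cell, sender, location, kind):
    """Fan an alert out to every cell member except the sender."""
    return [
        {"to": m, "from": sender, "kind": kind, "location": location}
        for m in cell.members if m != sender
    ]

neighbors = Cell("Elm St watch", members=["ana", "bo", "cy"])
alerts = issue_alert(neighbors, "ana", (40.71, -74.00), "flat tire")
print(len(alerts))  # 2 — everyone in the cell but the sender is notified
```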
Real-Time Response Management
Cell 411 can alert your friends, neighbors, and even emergency service providers when you are in danger, experiencing medical distress, or just need assistance.
Join, organize, and manage Cells of users all on your own, allowing you full control over what services and groups you want to have contact with.
Fully Customized Alerts
You can alert your friends when you need help or receive alerts when they need help, with directions to where you should go. Only people you specifically issue alerts to can see them.
Built By and For Activists
Police brutality, illegal searches and other government abuses can be broadcast out to your local cell with turn-by-turn directions to your location.
Over $331M has been Raised in Token Sales in the Past 12 Months
There are no live products in this group of 29 crowd sales.
Raising significant funding before any product is generally a bad idea. If you look at the history of startups, the vast majority of companies that have set expectations high with a lot of funding pre-product have failed.
I think there will be a lot of failure in this batch and expect to see better practices emerge for token-based crowd funding in the future.
The most impactful technologies are polarizing and make people uncomfortable. The boom in blockchain-based tokens in the past year has certainly made a lot of people uncomfortable.
On one side, you have people arguing against the use of blockchain-based tokens, citing the regulatory uncertainty and harm that they can cause consumers as a result of abuse from entrepreneurs.
And on the other side you have people claiming that ICOs are the future of venture capital and how entrepreneurs will get funded.
Tokens enable Internet tribes to emerge not in the form of traditional companies as we know them, but instead in a new type of organization called a decentralized autonomous organization (DAO).
A DAO is best described as a group of people bound together not by a legal entity and formal contracts, but instead by cryptographic tokens (incentives) and fully transparent rules that are written into the software.
- Usage tokens: A token that is required to use a service
- Work tokens: A token that gives users the right to contribute work to a DAO and earn in exchange for their work
If you’re building a network-based Internet product, forming a decentralized autonomous organization, implementing a blockchain-based token into the product and structuring the token as a usage or work token is likely to be a winning business model.
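The two token types can be illustrated with a toy ledger — entirely my own sketch, not any real project's contract. A usage token is spent to access a service; a work token is earned in exchange for contributing work to the DAO:

```python
class TokenLedger:
    def __init__(self):
        self.balances = {}

    def mint(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    # Usage token: spending the token is what grants access to the service
    def use_service(self, user, fee):
        if self.balances.get(user, 0) < fee:
            raise PermissionError("insufficient usage tokens")
        self.balances[user] -= fee
        return "service rendered"

    # Work token: contributing work to the DAO earns tokens in exchange
    def contribute_work(self, user, units, reward_per_unit=1):
        self.mint(user, units * reward_per_unit)

ledger = TokenLedger()
ledger.contribute_work("alice", units=5)   # alice earns 5 tokens for her work
print(ledger.use_service("alice", fee=3))  # prints "service rendered"
print(ledger.balances["alice"])            # 2 tokens remain
```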
Tokens are here to stay because of their ability to enhance Internet products and create intensely passionate tribes via network ownership effects.
It’s likely bad projects will continue to get funded because of the permissionless nature of blockchains. But it’s important not to throw the baby out with the bathwater — it’s also likely that the breakout consumer Internet products of the next 20 years will be token based. – Nick Tomaino
It would be interesting to make a list of VCs who’ve done better than if they’d just bought Amazon. Bet it would be short. – Paul Graham
COMPANIES / PROJECTS / PRODUCTS
Price Declines in 3D Printers Coming Out of China are Astounding
$210 – Aluminum frame, wifi, heated print bed
Open Source for Drones – PX4 Pro Open Source Autopilot
Comparison of leading drone software shows open source Dronecode PX4 stack rivals best DJI proprietary autopilots: Comparing Precision of Autopilots for Survey Missions – PX4
The First Decade of Augmented Reality
As 2006 was for smartphones, so 2017 is for augmented reality – everything is just about to happen, but what?
There was a point at which the first demos started appearing in public (2006), a point at which the first really viable consumer product appeared in the iPhone (2007), and then, several years later, a point at which sales really started exploding, as the iPhone evolved and Android followed it.
You can see some of that lag in the chart below – it took several years after the 2007 launch of the iPhone for sales to take off (even after the pricing model changed).
Most revolutionary technologies emerge like this, in stages – it’s rare for anything to spring into life fully formed.
Today, I think augmented reality is somewhere between the 2006 and 2007 points – we’ve seen some great demos and the first prototypes, but we don’t yet have a mass-market commercial product. We are close, though.
Microsoft is shipping the HoloLens: it has really good positional tracking and integrates the computing into the headset (at the cost of bulk), but it has a very small field of view (much less than Microsoft’s marketing videos suggest) and costs $3,000. A second version is planned for, apparently, 2019.
Magic Leap (an a16z investment) is working on its own wearable technology. A while ago I said that seeing Magic Leap was the coolest thing I’d seen since the iPhone. It’s now much cooler than that.
We’re not far away from the iPhone 1 stage, and then, over the next decade or so, we could evolve to a truly mass-market product.
How many people will have one of these?
Will every small town in Brazil and Indonesia have shops selling dozens of different $50 Chinese AR glasses where today they sell Androids? – Benedict Evans
What Companies Are Winning the Race for Artificial Intelligence?
My response contains some bias, because I work at Google Brain and I really like it there. My opinions are my own, and I do not speak for the rest of my colleagues or Alphabet as a whole.
I rank “leaders in AI research” among tech companies as follows:
1 – Google
I would say Google’s DeepMind is probably #1 right now in terms of AI research.
Their publications are highly respected within the research community, and span a myriad of topics such as Deep Reinforcement Learning, Bayesian Neural Nets, Robotics, transfer learning, and others.
Being London-based, they recruit heavily from Oxford and Cambridge, which are great ML feeder programs in Europe.
Google Brain has many employees who focus on long-term AI research in every AI subfield imaginable, similar to Facebook AI Research (FAIR) and DeepMind.
2 – Facebook
FAIR’s papers are good and my impression is that a big focus for them is language-domain problems like question answering, dynamic memory, Turing-test-type stuff.
Occasionally there are some statistical-physics-meets-deep-learning papers. Obviously they do computer vision type work as well.
3 – OpenAI
OpenAI has an all-star list of employees. Despite being a small group of ~50 people (so I guess not a “Big Player” by headcount or financial resources), they also have a top-notch engineering team and publish top-notch, really thoughtful research tools like Gym and Universe.
They’re adding a lot of value to the broader research community by providing software that was once locked up inside big tech companies. This has added a lot of pressure on other groups to start open-sourcing their code and tools as well.
I almost ranked them #1, on par with DeepMind in terms of top research talent, but they haven’t really been around long enough for me to confidently assert this.
They also haven’t pulled off an achievement comparable to AlphaGo yet, though I can’t overstate how important Gym / Universe are to the research community.
4 – Baidu
Baidu SVAIL and Baidu Institute of Deep Learning are excellent places to do research, and they are working on a lot of promising technologies like home assistants, aids for the blind, and self-driving cars.
They are definitely the strongest player in AI in China.
5 – Microsoft
Before the Deep Learning revolution, Microsoft Research used to be the most prestigious place to go.
They hire very senior faculty with many years of experience, which might explain why they sort of missed out on Deep Learning (the revolution in Deep Learning has largely been driven by PhD students).
6 – Apple
Apple is really struggling to hire deep learning talent, as researchers tend to want to publish and do research, which goes against Apple’s culture as a product company.
This typically doesn’t attract those who want to solve general AI or have their work published and acknowledged by the research community.
I think Apple’s design roots have a lot of parallels to research, especially when it comes to audacious creativity, but the constraints of shipping an “insanely great” product can be a hindrance to long-term basic science.
7 – IBM
I know a former IBM employee who worked on Watson and describes IBM’s “cognitive computing efforts” as a total disaster, driven by management that has no idea what ML can and cannot do but sells the buzzword anyway.
Watson uses Deep Learning for image understanding, but as I understand it the rest of the information retrieval system doesn’t really leverage modern advances in Deep Learning.
Basically there is a huge secondary market for startups to capture applied ML opportunities whenever IBM fumbles and drops the ball. No offense to IBM researchers; you’re far better scientists than I ever will be. My gripe is that the corporate culture at IBM is not conducive to leading AI research.
How Aristotle Created the Computer
Mathematical logic provided the foundation for a field that has had more impact on the modern world than any other – How Aristotle Created the Computer by Chris Dixon
Google To Prove It Has A Quantum Computer In A Few Months
The search giant plans to reach a milestone in computing history before the year is out.
Soon we will perform computations which current computers cannot replicate. We are particularly interested in applying quantum computing to artificial intelligence and machine learning.
A small quantum computer could perform more computations simultaneously than could be performed by the entire visible universe if it were all made into classical computers. In fact when I say “more” that’s an understatement. It’s exponentially more.
Google’s quantum chip ready for testing
John Martinis has given himself just a few months to reach a milestone in the history of computing.
By the end of this year, Martinis says, his team will build a device that achieves “quantum supremacy,” meaning it can perform a particular calculation that’s beyond the reach of any conventional computer.
Proof will come from a kind of drag race between Google’s chip and one of the world’s largest supercomputers.
“We think we’re ready to do this experiment. It’s something we can do now,” says Martinis.
49 qubits needed for quantum supremacy
Researchers have so far demonstrated quantum computing with only small groups of qubits. Google has released results from a chip that has nine qubits arranged in a line, but Martinis says he’ll need a grid of 49 qubits for his quantum supremacy experiment.
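A back-of-envelope calculation (my own, not from the article) shows why a grid of roughly 49 qubits marks the supremacy threshold: simulating the full quantum state vector classically requires memory that doubles with every added qubit.

```python
n_qubits = 49
amplitudes = 2 ** n_qubits       # length of the quantum state vector
bytes_needed = amplitudes * 16   # one complex number (16 bytes) per amplitude

print(amplitudes)            # 562949953421312 amplitudes
print(bytes_needed / 1e15)   # ~9 petabytes of RAM just to hold the state
```

At 50+ qubits the state no longer fits in any existing supercomputer's memory, which is why a 49-qubit grid makes a meaningful drag race against classical machines.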
Google’s latest chip has only six qubits, but they are arranged in a two-by-three configuration that Martinis says shows the company’s technology still works when qubits are nestled side by side, as they will be in larger devices.
“Now we’re ready to kind of move fast.” Designs for devices with 30 to 50 qubits are already in progress, he says.
Martinis says that the experiment could become a benchmark for anyone claiming to have a working quantum computer.
He also says the target has helped managers at Google, and the company’s cofounder Sergey Brin, appreciate that the technology is becoming real.
“They all get it and are very excited about it,” says Martinis. – Tom Simonite
Neuralink and the Brain’s Magical Future
A “whole-brain interface” that feels as much a part of you as your cortex and limbic system
Six weeks after first learning about Elon Musk’s new venture, Neuralink, I’m convinced that it somehow manages to eclipse Tesla and SpaceX in both the boldness of its engineering undertaking and the grandeur of its mission.
The other two companies aim to redefine what future humans will do—Neuralink wants to redefine what future humans will be.
The mind-bending bigness of Neuralink’s mission, combined with the labyrinth of impossible complexity that is the human brain, made this the hardest set of concepts yet to fully wrap my head around—but it also made it the most exhilarating when, with enough time spent zoomed on both ends, it all finally clicked. I feel like I took a time machine to the future, and I’m here to tell you that it’s even weirder than we expect.
The budding industry of brain-machine interfaces is the seed of a revolution that will change just about everything. But in many ways, the brain-interface future isn’t really a new thing that’s happening. If you take a step back, it looks more like the next big chapter in a trend that’s been going on for a long time.
Language took forever to turn into writing, which then took forever to turn into printing, and that’s where things were when George Washington was around.
Then came electricity and the pace picked up. Telephone. Radio. Television. Computers. And just like that, everyone’s homes became magical.
Then phones became cordless. Then mobile. Computers went from being devices for work and games to windows into a digital world we all became a part of.
Then phones and computers merged into an everything device that brought the magic out of our homes and put it into our hands. And on our wrists.
We’re now in the early stages of a virtual and augmented reality revolution that will wrap the magic around our eyes and ears and bring our whole being into the digital world.
You don’t need to be a futurist to see where this is going.
Magic has worked its way from industrial facilities to our homes to our hands and soon it’ll be around our heads. And then it’ll take the next natural step. The magic is heading into our brains.
It will happen by way of a “whole-brain interface”—a brain interface so complete, so smooth, so biocompatible, and so high-bandwidth that it feels as much a part of you as your cortex and limbic system.
A whole-brain interface gives your brain the ability to communicate wirelessly with the cloud, with computers, and with the brains of anyone who has a similar interface in their head. This flow of information between your brain and the outside world would be so effortless, it would feel similar to the thinking that goes on in your head today.
Current Brain Machine Interfaces
In 1969, a researcher named Eberhard Fetz connected a single neuron in a monkey’s brain to a dial in front of the monkey’s face. The dial would move when the neuron fired.
When the monkey would think in a way that fired the neuron and the dial would move, he’d get a banana-flavored pellet. Over time, the monkey started getting better at the game because he wanted more delicious pellets. The monkey had learned to make the neuron fire and inadvertently became the subject of the first real brain-machine interface.
Progress was slow over the next few decades, but by the mid-90s, things had started to move, and it’s been quietly accelerating ever since.
The major BMI industries of the future that will give all humans magical superpowers and transform the world are in their fetal stage right now—and we should look at what’s being worked on as a set of clues about what the mind-boggling worlds of 2040 and 2060 and 2100 might be like.
Early BMI type #1: Using the motor cortex as a remote control
This actually works. Through the work of motor-cortex-BMI pioneer company BrainGate, paralyzed patients have played video games using only their minds.
Want to pick up a mug of coffee and take a sip? A quadriplegic woman in a BrainGate trial did exactly that, directing a robotic arm with her motor cortex.
In these developments are the seeds of other future breakthrough technologies—like brain-to-brain communication.
Early BMI type #2: Artificial ears and eyes
Giving sound to the deaf and sight to the blind is among the more manageable BMI categories.
On the ears side of things, recent decades have seen the development of the groundbreaking cochlear implant.
A cochlear implant is a little computer that has a microphone coming out of one end (which sits on the ear) and a wire coming out of the other that connects to an array of electrodes that line the cochlea.
It is an artificial ear, performing the same sound-to-impulses-to-auditory-nerve function the biological ear does. Today’s cochlear implant allows deaf people to hear speech and have conversations, which is a groundbreaking development.
Many parents of deaf babies are now having a cochlear implant put in when the baby’s about one year old.
There’s a similar revolution underway in the world of blindness, in the form of the retinal implant.
A more complicated interface than the cochlear implant, the first retinal implant—the Argus II, made by Second Sight—was approved by the FDA in 2011.
Early BMI type #3: Deep Brain Stimulation
What happens here is that one or two electrode wires, usually with four separate electrode sites each, are inserted into the brain, often ending up somewhere in the limbic system.
Then a little pacemaker computer is implanted in the upper chest and wired to the electrodes.
The electrodes can then give a little zap when called for, which can do a variety of important things. Like:
- Reduce the tremors of people with Parkinson’s Disease
- Reduce the severity of seizures
- Chill people with OCD out
Experimentally (not yet FDA-approved), it has also been able to mitigate certain kinds of chronic pain like migraines or phantom limb pain, treat anxiety, depression, or PTSD, and even be combined with muscle stimulation elsewhere in the body to restore and retrain circuits broken down by stroke or neurological disease.
This is the state of the early BMI industry, and it’s the moment when Elon Musk is stepping into it. For him, and for Neuralink, today’s BMI industry is Point A.
Neuralink’s First Product
The business side of Neuralink is a brain-machine interface development company.
They want to create cutting-edge BMIs—what one of them referred to as “micron-sized devices.”
Doing this will support the growth of the company while also providing a perfect vehicle for putting their innovations to use (kind of the way SpaceX uses their launches both to sustain the company and experiment with their newest engineering developments).
As for what kind of interface they’re planning to work on first, here’s what Elon said:
We are aiming to bring something to market that helps with certain severe brain injuries (stroke, cancer lesion, congenital) in about four years.
We Don’t Need to Understand the Brain to Make Engineering Progress
If it were a prerequisite to understand the brain in order to interact with the brain in a substantive way, we’d have trouble.
But it’s possible to decode all of those things in the brain without truly understanding the dynamics of the computation in the brain.
Being able to read it out is an engineering problem. Being able to understand its origin and the organization of the neurons in fine detail in a way that would satisfy a neuroscientist to the core—that’s a separate problem. And we don’t need to solve all of those scientific problems in order to make progress.
– Neuralink co-founder Flip Sabes
If we can just use engineering to get neurons to talk to computers, we’ll have done our job, and machine learning can do much of the rest. Which then, ironically, will teach us about the brain. As Flip points out:
The flip side of saying, “We don’t need to understand the brain to make engineering progress,” is that making engineering progress will almost certainly advance our scientific knowledge—kind of like the way AlphaGo ended up teaching the world’s best players better strategies for the game.
Then this scientific progress can lead to more engineering progress—the engineering and the science are gonna ratchet each other up here.
Neuralink’s Challenge: Making Stevenson’s Law Look More Like Moore’s Law
- We are currently able to record about 500 neurons simultaneously
- 100,000 simultaneously recorded neurons is a number that would allow for the creation of a wide range of incredibly useful BMIs with a variety of applications.
- 1 million simultaneously recorded neurons is an interface that could really change the world.
- The number of neurons we can simultaneously record seems to consistently double every 7.4 years. If that rate continues, it’ll take us till the end of this century to reach a million.
- If we double our total every 18 months, like we do with computer transistors, we’ll get to a million in the year 2034.
Neuralink’s hurdles are technology hurdles. And there are many—but one challenge stands out as the largest: a challenge that, if conquered, may be enough to trigger all the other hurdles to fall and totally change the trajectory of our future.
There have never been more than a couple hundred electrodes in a human brain at once. When it comes to vision, that equals a super low-res image. When it comes to motor, that limits the possibilities to simple commands with little control. When it comes to your thoughts, a few hundred electrodes won’t be enough to communicate more than the simplest spelled-out message.
We need higher bandwidth if this is gonna become a big thing. Way higher bandwidth.
The Neuralink team threw out the number “one million simultaneously recorded neurons” when talking about an interface that could really change the world.
I’ve also heard 100,000 as a number that would allow for the creation of a wide range of incredibly useful BMIs with a variety of applications.
Early computers had a similar problem. Primitive transistors took up a lot of space and didn’t scale easily.
Then in 1959 came the integrated circuit—the computer chip. Now there was a way to scale the number of transistors in a computer, and Moore’s Law—the concept that the number of transistors that can fit onto a computer chip doubles every 18 months—was born.
Until the 90s, electrodes for BMIs were all made by hand. Then we started figuring out how to manufacture those little 100-electrode multielectrode arrays using conventional semiconductor technologies.
Neuralink co-founder Ben Rapoport believes that “the move from hand manufacturing to Utah Array electrodes was the first hint that BMIs were entering a realm where Moore’s Law could become relevant.”
This is everything for the industry’s potential. Our maximum today is a couple hundred electrodes able to measure about 500 neurons at once—which is either super far from a million or really close, depending on the kind of growth pattern we’re in.
If we add 500 more neurons to our maximum every 18 months, we’ll get to a million in the year 5017.
If we double our total every 18 months, like we do with computer transistors, we’ll get to a million in the year 2034.
Currently, we seem to be somewhere in between. Ian Stevenson and Konrad Kording published a paper that looked at the maximum number of neurons that could be simultaneously recorded at various points throughout the last 50 years (in any animal), and put the results on this graph:
Sometimes called Stevenson’s Law, this research suggests that the number of neurons we can simultaneously record seems to consistently double every 7.4 years.
If that rate continues, it’ll take us till the end of this century to reach a million.
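The three growth scenarios — linear growth, Stevenson's 7.4-year doubling, and Moore-style 18-month doubling — can be checked in a few lines, starting from today's ~500 simultaneously recorded neurons (the start year and the rounding are my approximations):

```python
import math

start_year, current, target = 2017, 500, 1_000_000
doublings = math.log2(target / current)  # ~11 doublings needed to reach a million

stevenson = start_year + doublings * 7.4  # doubling every 7.4 years
moore     = start_year + doublings * 1.5  # doubling every 18 months
linear    = start_year + (target - current) / 500 * 1.5  # +500 every 18 months

print(round(stevenson))  # ~2098: the end of this century
print(round(moore))      # ~2033-2034
print(round(linear))     # ~5016: effectively never
```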
Whatever the equivalent of the integrated circuit is for BMIs isn’t here yet, because 7.4 years is too big a number to start a revolution.
The breakthrough here isn’t the device that can record a million neurons—it’s the paradigm shift that makes the future of that graph look more like Moore’s Law and less like Stevenson’s Law. Once that happens, a million neurons will follow.
We are Already Digitally Superhuman
We already have chips in the brain. We have deep brain stimulation to alleviate the symptoms of Parkinson’s Disease, we have early trials of chips to restore vision, we have the cochlear implant—so to us it doesn’t seem like that big of a stretch to put devices into a brain to read information out and to read information back in.
– Neuralink co-founder Flip Sabes
Elon calls the whole-brain interface and its many capabilities a “digital tertiary layer,” a term which has two levels of meaning.
When Elon refers to a “digital tertiary layer,” he’s considering our existing brain having two layers—our animal limbic system (which could be called our primary layer) and our advanced cortex (which could be called our secondary layer).
The interface, then, would be our tertiary layer—a new physical brain part to complement the other two.
If thinking about this concept is giving you the willies, Elon has news for you:
We already have a digital tertiary layer in a sense, in that you have your computer or your phone or your applications.
You can ask a question via Google and get an answer instantly. You can access any book or any music.
With a spreadsheet, you can do incredible calculations. If you had an Empire State building filled with people—even if they had calculators, let alone if they had to do it with a pencil and paper—one person with a laptop could outdo the Empire State Building filled with people with calculators.
You can video chat with someone in freaking Timbuktu for free.
This would’ve gotten you burnt for witchcraft in the old days.
You can record as much video with sound as you want, take a zillion pictures, have them tagged with who they are and when it took place. You can broadcast communications through social media to millions of people simultaneously for free. These are incredible superpowers that the President of the United States didn’t have twenty years ago.
The thing that people, I think, don’t appreciate right now is that they are already a cyborg. You’re already a different creature than you would have been twenty years ago, or even ten years ago. You’re already a different creature.
You can see this when they do surveys of like, “how long do you want to be away from your phone?” and—particularly if you’re a teenager or in your 20s—even a day hurts. If you leave your phone behind, it’s like missing limb syndrome.
I think people—they’re already kind of merged with their phone and their laptop and their applications and everything.
This is a hard point to really absorb, because we don’t feel like cyborgs. We feel like humans who use devices to do things.
But think about your digital self—you when you’re interacting with someone on the internet or over FaceTime or when you’re in a YouTube video. Digital you is fully you—as much as in-person you is you—right?
The only difference is that you’re not there in person—you’re using magic powers to send yourself to somewhere far away, at light speed, through wires and satellites and electromagnetic waves. The difference is the medium.
In that sense, your phone is as much “you” as your vocal cords or your ears or your eyes. All of these things are simply tools to move thoughts from brain to brain—so who cares if the tool is held in your hand, your throat, or your eye sockets? The digital age has made us a dual entity—a physical creature who interacts with its physical environment using its biological parts and a digital creature whose digital devices—whose digital parts—allow it to interact with the digital world.
But because we don’t think of it like that, we’d consider someone with a phone in their head or throat a cyborg, and someone else with a phone in their hand, pressed up against their head, not a cyborg. Elon’s point is that what makes a cyborg a cyborg is their capabilities—not which side of the skull those capabilities are generated from.
We’re already a cyborg, we already have superpowers, and we already spend a huge part of our lives in the digital world. And when you think of it like that, you realize how obvious it is to want to upgrade the medium that connects us to that world. This is the change Elon believes is actually happening when the magic goes into our brains:
You’re already digitally superhuman. The thing that would change is the interface—having a high-bandwidth interface to your digital enhancements.
The thing is that today, the interface all necks down to this tiny straw, which is, particularly in terms of output, it’s like poking things with your meat sticks, or using words—either speaking or tapping things with fingers.
And in fact, output has gone backwards. It used to be, in your most frequent form, output would be ten-finger typing. Now, it’s like, two-thumb typing.
That’s crazy slow communication. We should be able to improve that by many orders of magnitude with a direct neural interface.
In other words, putting our technology into our brains isn’t about whether it’s good or bad to become cyborgs. It’s that we are cyborgs and we will continue to be cyborgs—so it probably makes sense to upgrade ourselves from primitive, low-bandwidth cyborgs to modern, high-bandwidth cyborgs.
A whole-brain interface is that upgrade. It changes us from creatures whose primary and secondary layers live inside their heads and whose tertiary layer lives in their pocket, in their hand, or on their desk, to creatures whose three layers all live together.
I’ll guess that right now, some part of you believes this insane world could really maybe be the future—and another part of you refuses to believe it. I’ve got a little of both of those going on too.
The concept of being blown away by the future speaks to the magic of our collective intelligence—but it also speaks to the naivety of our intuition. Our minds evolved in a time when progress moved at a snail’s pace, so that’s what our hardware is calibrated to. And if we don’t actively override our intuition—the part of us that reads about a future this outlandish and refuses to believe it’s possible—we’re living in denial.
The reality is that we’re whizzing down a very intense road to a very intense place, and no one knows what it’ll be like when we get there. A lot of people find it scary to think about, but I think it’s exciting.
Follow me on Twitter @leebanfield1