 

Machines Will Win

 
 
 

The future is filled with disruption. But, the pending disruptions are taking on new forms. The relationship between people and machines is changing forever and our expectations for how the world will evolve are changing too.

- Gartner

 

Bee Partners is a pre-Seed venture capital firm based in San Francisco, California. We are small, with only four principals; but due to our location, background and experience, we see a very large number of early-stage ideas. We are exposed on an ongoing basis to ventures that excite for their ambition and luminosity, and to Founders who impress with their talent and drive. The challenge, as it is for all VC firms, large or small, is to be brutally selective. We are fixated on the future, and always keenly aware of what characteristics compel us to invest in companies on behalf of those who entrust us with their money.

Pre-Seed investing is arguably the stage at which investors like us see the very widest array of innovation, at its earliest inception, often from teams with ideas coming straight from the labs of Silicon Valley and its elite universities. This affords us insight into innovation, in real time, across the tech landscape—inception-stage investors are “closest to the metal,” as some would say. Because of this, we walk a fine line between immersing ourselves in what is fascinating about the “now” and our responsibility to carefully consider the future.

And as investors, we can’t look clear-eyed at the future unless we look squarely at one essential truth: The machines of technology are evolving faster than humans. We live in the age of technology and there is no turning back. The future we make, the problems we solve, the opportunities we create will derive less from our ability to harness machines than from trusting the newfound abilities of the machines we create.

In short, as we invest in the future, we have concluded: Machines Will Win. We will partner with them; we will build trust within them. We believe this collaboration with machines will create a better future, a future we all get to live in.

We come to this conclusion not without some trepidation. Machine “intelligence” triggers a binary response: fear of a dystopian future on the one hand, or general optimism on the other. After intense internal debate, we have come down firmly on the side of pragmatic optimism. And we have done so largely as a result of recognizing that we—precisely as early-stage investors—have not just a role, but a responsibility to help steer the trajectory of technology in a positive direction.

 

We shape our tools, and thereafter our tools shape us.

- John M. Culkin, “A Schoolman’s Guide to Marshall McLuhan”

 


But hasn’t this always been so? What is it about the evolution of technology and society that makes right now any different than any other time? That is the point we hope to make in this paper: By seeking historical context and by contrasting human vs. machine evolution, we determine that the current moment presents a window of opportunity—an opportunity to define the standards upon which humans and machines will collaboratively evolve. This collaborative evolution offers, in fact, an unprecedented favorable circumstance for Founders and investors, as well as meaningful impact from a global social perspective. It requires that we continually challenge incumbencies, commit to a willingness to learn, and hold an unwavering trust that this future where machines win is one in which humanity also thrives.

This paper explains the thoughts that guide our internal calculus on a daily basis, and it signals to our Founders, investors, and peers where our motivations lie. It expresses a conviction we believe to our bones. In short, we’re betting the firm on this.




Human Evolution: Interaction, Language, Books & Value

 
 

Knowledge, when shared, becomes like a grand, collective intergenerational collaboration.

- Tim Urban, “Neuralink and the Brain’s Magical Future (G-Rated Version)”

 

In contrast to the rapid, accelerating evolution of machines, human evolution has progressed at a glacial pace. The most transformative development in human ability was the formation of the neocortex (circa 200 million years ago) when evolutionary magic occurred. First, humans were able to produce and sustain a seemingly limitless universe of complex internal thoughts. Then, they were able to express those thoughts through symbolic sounds reverberating through the air to other human brains that could absorb, interpret, and understand as intended. Human-to-human interaction, via language, was born.

The emergence of language triggered the rise of human shared intelligence. From one generation to the next, the ability of language to distribute knowledge and learning more effectively distanced humans from other species. The invention of print, with its ability to effectively “store and forward” knowledge — even across generations — advanced this evolution even further.


Language empowered humans to tell stories that propelled their very survival, allowing them to communicate to overcome daily threats. Beyond mere survival, language empowered humans to innovate by building upon a growing wealth of tribal knowledge. Further, as the human population increased, so did the possible permutations of types and varieties of human conversations that challenged the limits of beliefs and ideas. As a direct result, innovation flourished exponentially. Just like Metcalfe’s law, wherein “the value of a telecommunications network is proportional to the square of the number of connected users of the system,” the value of our collective human network is proportional to the number of humans interacting on the planet.

(Note: Given the “clouds of CPUs” that will augment humans, described later, we might more appropriately cite Reed’s Law, which says that the utility of large networks, particularly social networks, can scale exponentially with the size of the network.)
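The contrast between Metcalfe’s and Reed’s laws can be made concrete with a few lines of arithmetic. This is a toy sketch only: the proportionality constants are unknown, and the function names are ours, not from either law’s original formulation.

```python
# Toy comparison of the two network-value laws cited above.
# Metcalfe's law: value grows with the number of possible pairwise links.
# Reed's law: value grows with the number of possible subgroups,
# which dwarfs pairwise growth as the network gets large.

def metcalfe_value(n: int) -> int:
    """Number of possible pairwise connections among n participants."""
    return n * (n - 1) // 2

def reed_value(n: int) -> int:
    """Number of possible subgroups of two or more participants."""
    return 2 ** n - n - 1

for n in (2, 10, 20):
    print(n, metcalfe_value(n), reed_value(n))
# At n = 20, pairwise links number 190 while possible subgroups
# already exceed one million.
```

The point of the sketch: once participants (human or machine) can form arbitrary groups rather than just pairs, the theoretical value of the network grows exponentially rather than quadratically.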
The printed word, and then mass-produced books, added another layer of abstraction and method of distribution that could take one human’s ideas and eventually share them with millions. Assuming that our innermost motivation is to procreate, then our secondary core motivation is to preserve our gene pool by increasing our collective knowledge and augmenting our abilities through invention. The proliferation of information and education democratized the ability for humans to create value, which led to the invention of value-creating machines through which humans could outsource human time and energy. We continue along this path.

Machine Evolution: Speed, Precision, Complexity & Data

 
 

Old King Coal was a merry old soul:
“‘Tis fairly done,” quoth he,
When he saw the myriad wheels at work,
O’er all the land and sea.
They spared the bones and strength of men,
They hammer’d, wove, and spun;
There was nought too great, too mean, or small,
The giant Steam had power for all;
His task was never done.

- Charles Mackay, from the song “Old King Coal” (1846), The Poetical Works of Charles Mackay

 


First of all, what do we mean by “machine”?

In the first Industrial Revolution (1760s-1830s), the definition of that word was obvious: The spinning jenny (1764), with its metal frame and eight wooden spindles, dramatically reduced the human workforce needed to produce a simple thread, transforming not only the textile and agricultural industries, but also every derivative industry that touched them. An industrial version of the steam engine (1775) followed relatively quickly. This combination of mechanical innovation and machine power was the true catalyst for the change at that time: our first hint at semi-autonomous manufacturing and the rise of the factory system in the second Industrial Revolution (1860s-early 1900s). Human-to-machine interaction, first through levers and knobs, was born.

On the one hand, menial manual labor began to be dramatically reduced, and the standard of living for the general population improved. On the other, many manual workers were quickly displaced by these machines and found themselves facing uncertain transitions and roles. The Luddites (1811) were radicalized by the dramatic acceleration and disruption of this “first” machine era, a concern that, as we will see, still reverberates today. Migration of labor through industrial/tech transitions is as important a topic now as it was then.

Electrification and transportation marked the predominant impacts on society through the 1950s and ‘60s, at which point we first encountered the modern computer. Up until this time, machines were tangibly machines. You could touch them, observe their operation, witness their output. There was nothing particularly mysterious at all about this world, even as mechanical complexity increased. (Disclaimer: For brevity’s sake, we conveniently set aside innovations in chemistry, metallurgy, and electronics for this discussion.)

You could actually take apart an IBM Selectric typewriter and understand how it worked.

This 200-year time span first demonstrated to us—in the last “seconds” of our human evolution—the idea that our own innate human abilities might be superseded by machines in areas like speed, precision, and the ability to handle complexity.

The share of “work” over time is being divided between humans and machines — both physical and logical. Work that is perceived as repetitive, requiring high precision, highly complex, or dangerous will be increasingly delegated to machines, while work perceived as requiring a “human in the loop” (HITL), high creativity, or intuition may remain in the human domain. There will be a need for human-machine collaboration for many generations to come.



Enter the Logical Machine

The emergence of “computing” in the 1950s introduced the idea of the logical machine: a “physical” machine, but with negligible actual moving parts (in other words, all the “action” happened in vacuum tubes and then silicon). The “ghost in the machine” took raw input, just as did the spinning jenny. It performed some operation that could be logically deconstructed, and produced a refined output as a result. The byproduct: essentially, data.

The iconic IBM 360 is viewed by many as the computing equivalent of the refined steam engine. Despite the fact that IBM’s own Thomas Watson reportedly claimed (in 1943) that “I think there is a world market for about five computers,” the IBM 360 was widely adopted due to its flexible architecture and ushered in the dizzying logical machine evolution we’ve experienced since.

In the mainframe era, the operational protocol was “time-sharing”: one machine could support hundreds of simultaneous users with, in essence, the same CPU. (The mainframe era has arguably now passed; while there are still plenty of mainframes out there, the rise of “the cloud” has largely stepped into the conceptual role mainframes once held in terms of centralized support of many users.) With the 1981 launch of the personal computer, however, the exhilaration and freedom of one CPU per person was transformative. Bill Gates’ “information at your fingertips” in 1989 foreshadowed the immeasurable impact of the world wide web in the early 1990s. For roughly 20 years, one person / one CPU (in the form of a desktop or laptop) was the norm, and societally we settled into a relatively comfortable relationship with machines as tools for augmentation, with us still very firmly in control of the relationship.

But something’s different now. Machines, in their combined logical and physical makeup, have evolved to the point where we are in a time window of transformative coevolution that could, in impact, exceed that of even the great industrial and computing revolutions. With the rise of mobile devices, embedded systems, and IoT generally, we are suddenly evolving from one CPU per person to (eventually) hundreds of CPUs dedicated to or shared by the average individual. We, both humans and machines, will literally be moving in a given day through “clouds of CPUs.” And with the advent of distributed ledger technology, as represented in the current instance by blockchain, we will overcome existing IoT challenges: the high infrastructure and maintenance costs embedded in centralized clouds, data centers, and networking equipment; the risk of a single point of failure; and the communication gaps that persist in spite of dramatically improved operating systems at scale. Blockchain enables machines to operate in direct communication without a “trusted” intermediary (human or machine): Machine-to-machine interaction has become reality.
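The core mechanism that lets machines verify a shared history without a trusted intermediary can be sketched in a few lines: each record commits to its predecessor via a cryptographic hash, so tampering anywhere in the chain is detectable by anyone holding a copy. This is an illustrative toy only; real blockchains add consensus, signatures, and replication on top of this idea.

```python
# Minimal hash-chain sketch of the tamper-evident ledger idea:
# each record's hash covers both its payload and the previous hash,
# so altering any earlier record breaks every hash after it.
import hashlib
import json

def add_record(chain: list, payload: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev_hash, "payload": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every hash; any tampering is detected."""
    prev_hash = "0" * 64
    for rec in chain:
        body = json.dumps({"prev": prev_hash, "payload": rec["payload"]},
                          sort_keys=True)
        if rec["prev"] != prev_hash:
            return False
        if rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True

ledger = []
add_record(ledger, {"sensor": "unit-7", "reading": 42})
add_record(ledger, {"sensor": "unit-7", "reading": 43})
assert verify(ledger)
ledger[0]["payload"]["reading"] = 99   # a tampered record...
assert not verify(ledger)              # ...is immediately detectable
```

Two machines exchanging such records need only agree on the hashing rules, not on a third party, which is what makes direct machine-to-machine interaction plausible.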

Human evolution has been an exceedingly slow yet methodical process by our historical clock. Machine evolution, against that same clock, has been exceedingly fast—and is arguably increasing in speed. How should we interpret this? In essence, machines have in many respects already exceeded human capabilities, as we saw as early as the industrial revolutions, in terms of speed, precision, and fortitude; and this trend has only continued. With the technology explosion of the past 60 years, and the corresponding by-product of unimaginable amounts of data, we’ve entered an era where those attributes we’ve considered most human—vision, touch, memory, even cognition—will definitely be challenged and/or technically exceeded by machines. Machines will, in fact, win. And based on their evolutionary clock, they will win big.

In the “mainframe era,” a single CPU (effectively) was (time)shared by hundreds of people. We’re at the tail end of what might be called the PC era: where there was roughly a 1:1 relationship between individuals and their personal computers. Mobile exploded this ratio, hinting at the inevitable future where individuals will both carry with them multiple CPUs — as well as move through clouds of CPUs — as IoT and mesh networks evolve, in effect carrying their personal data with them across an interconnected landscape.


However, we nonetheless are convinced that humans will remain a big part of the equation for the foreseeable future. The definition of HITL (“human in the loop”) will continue to evolve over time, from strict oversight of machine tasks to increasing autonomy, as we also come to understand what it means to trust the machines working for us. The patterns that make up HITL interactions and establish human-machine trust will be established in the near term, but could have very long-term effects, much as the frameworks established in the narrow window of the early 1990s for the world wide web (legacy frameworks such as TCP/IP, HTML, JavaScript, and CSS) are largely still fundamental.

This is the most important point: We are in a window of profound change. The steps innovators and backers take, the designs we create, the outcomes we envision and build toward in the next few decades will be as important as those machines and processes that laid the foundation for the factory systems in the late 1800s and the Internet frameworks in the 1990s. But, to be clear, we as investors will not be the ones to deliver the solutions here. It will be Founders who are deeply immersed in the respective technical domains who will show us the best, most innovative ways forward. It falls to us as investors to identify the optimal traits of Founders and then: Trust the Founders.

In this diagram, human evolution is over-simplistically drawn as a near-horizontal line: i.e., biological evolution is measured in epochs, not years. Machine (or technological) evolution has, on the other hand, compared to human history, been breathtakingly fast, and also arguably super-linear. The “window of opportunity” is shown here in green: the relatively few years, historically speaking, before, during, and after machines exceed human capabilities. We are here. This underlines how important the decisions are that we collectively make in this window, their profound future implications, and the near-unbounded opportunity they represent.



Collaborative Evolution: Automation, Augmentation and Trust

 
 

In the long history of humankind (and animal kind, too) those who learned to collaborate and improvise most effectively have prevailed.

- Author unknown; commonly misattributed to Charles Darwin. But we believe the point is well-made.

 

The infamous Move 37 by Google’s AlphaGo in its Go match against Lee Sedol in Seoul, Korea, in March of 2016 has been described by some as “the seminal moment in human development.” Why? It wasn’t so much that an algorithm outperformed a human opponent at a difficult cognitive task based on sheer computing power, but rather that, arguably for the first time in history, a “machine” produced an outcome dubbed so “creative” and “unique” (even after decades of Turing Test experiments to try to replicate human behavior) that it flirted with the highest order of cognition we would associate with “uniquely human.” Jarringly so: It was a wake-up call.

But to back up for a second, in order to describe how we got here, we might want to examine three distinct but overlapping conditions: automation, augmentation, and autonomy. Each of these characteristics of human-machine interaction involves technological innovation, proof of performance, and ultimately—as a result—varying degrees of trust on the part of humans to comfortably delegate more physical and logical tasks to physical and logical machines.

Automation

Few would argue that repetitive, manual, discrete tasks that benefit from speed, precision, and consistency (assembly-line work) are “fun” for humans. In fact, we’re not very good at these activities, as much because we get bored, tired, and disengaged as anything else.

Machines are better suited to carry out the predictable physical labor that accounts for 18 percent of time spent across all U.S. occupations. Beyond these tasks, which are parts of larger jobs appropriately outsourced to machines, certain kinds of work may disappear altogether due to the power and promise of AI. For example, repetitive work like telesales and customer support management will likely disappear within a five-year window. Routine jobs—driving a truck, guarding a facility—could very well dematerialize within 10 years. Even optimizing professions such as radiologist or research analyst could vanish within 15 years.

In relation to data, an inordinate and unnecessary amount of time is taken up in U.S. occupations by tasks that could (and probably should) be outsourced to computers: Seventeen percent of time is dedicated to data capture for data storage, and 16 percent is allocated to data processing. It is undeniable that machines will win at these tasks in three particular ways: 1) producing better results for questions already being asked on existing data, 2) empowering humans to ask new kinds of questions on existing data, and 3) making available new data to analyze, such as audio, image or video.

Historically, however, automation has consistently produced more jobs than were displaced. First, the Industrial Revolution shifted people away from the fields (and other natural resources) toward factories while adding jobs. Second, robots on the factory floor, along with offshoring, led to a decrease in manufacturing jobs and the shift to the service economy while adding jobs. Now, we’re seeing this shift happen again with AI permeating every aspect of human experience. Professions may be made obsolete, but jobs are continually created in more meaningful, higher-order tasks. And we are convinced that, net-net, this is only a good thing.

New jobs that will likely emerge in the AI social and economic revolution include “Trainers (how AI should perform), Explainers (bridge the gap between technologists and business leaders), and Sustainers (help ensure AI systems are operating as designed)” (“The Jobs That Artificial Intelligence Will Create”). And there will be so many more that the innovators of tomorrow haven’t even dreamed up yet. As technology changes and new jobs are inevitably created, employers will have a moral imperative to reskill the workforce to more readily leverage data to complement AI capabilities in increasingly digital organizations. They must empower employees with flexible education and training to expediently teach newly important skills. With this opportunity afforded, employees will have to embrace a “growth mentality” and subscribe to lifelong learning and on-the-job training. They will need to take advantage of online educational resources which, thanks to ubiquitous computing, are now available anytime, anywhere, on every conceivable topic known to humankind. Computer-based learning may, in effect, become increasingly personalized, with the identification of skills gaps and opportunities for retraining coming from the AIs themselves.

Automation of a myriad of current human tasks is inevitable and impending. “...(M)achines will take over more and more of the routine tasks that defined work in a standardized, mass market product world.” But automation is far from autonomy—or unsupervised agency over complex tasks on behalf of humans. Automation is the most obvious implementation of machines—both physical (the spinning jenny) and logical (Adobe Photoshop). But in non-autonomous automation, there is typically much more than a “human in the loop.”

 

This is what automation always does; Excel didn’t give us artificial accountants, Photoshop and InDesign didn’t give us artificial graphic designers, and indeed steam engines didn’t give us artificial horses. . . Rather, we automated one discrete task, at massive scale.

- Benedict Evans, "Ways to think about machine learning"

 

Augmentation

“Automation” is often conflated with “autonomy” when we talk about machine evolution. The fact is, however, that there is considerable overlap involving tasks where cooperation and/or collaboration with machines is a necessity. On a spectrum from full automation (i.e., autonomy) to manual solutions, there will still be the need for supervisory oversight on the autonomous end, and likely augmentation or support on the manual end. But on balance, machine evolution will remove more and more of the mundane, allowing humans to focus more and more on “distinctly human capabilities.”

Ten percent of email replies in the current generation of Google’s G Suite email environment are machine-generated, but user-executed. The next generation of G Suite Smart Compose will literally complete sentences for you as you write. These are very subtle examples of extremely important trends. Human task augmentation will manifest in surprisingly interesting and incremental ways. But key here are the ideas that 1) the ultimate “decision” is up to the human (some call this “last-mile control”), and 2) the “augmentation” product is the result of very complex, often cloud-based and “black box” machine learning (that is ever improving). Machine “cognition” in terms of pure performance vastly outpaces human CPU cycles, which results in the semi-magic—almost ghostly—ability of Smart Compose to finish sentences in real time. But the human user is still gracefully afforded the ability to accept, or decline, the proffered assistance. This is leagues beyond Clippy.
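The “last-mile control” pattern described above can be sketched in a few lines: a black-box model proposes, and the human disposes. All names here are hypothetical stand-ins; this is not Google’s actual Smart Compose API.

```python
# Sketch of "last-mile control": the machine proposes a completion,
# but the final decision always rests with the human.
# suggest_completion is a hypothetical stand-in for a black-box model.

def suggest_completion(prefix: str) -> str:
    """Stand-in for an opaque model that proposes the rest of a sentence."""
    canned = {"Thanks for": " your email. I'll get back to you shortly."}
    return canned.get(prefix, "")

def compose(prefix: str, human_accepts) -> str:
    """Offer the machine's suggestion; apply it only if the human accepts."""
    suggestion = suggest_completion(prefix)
    if suggestion and human_accepts(suggestion):  # the human decides
        return prefix + suggestion
    return prefix  # declined or no suggestion: machine output is discarded

text = compose("Thanks for", human_accepts=lambda s: True)
print(text)  # the accepted suggestion is appended
```

The design choice worth noticing is that the suggestion path and the decision path are separate: the model can be arbitrarily opaque, because its output is advisory until a human commits it.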


Autonomy

There will definitely be, and already are, cases where machines operate in conditions of almost complete autonomy. And this is where we most run the risk of “losing control,” or rather where we most benefit from “ceding control.” All autonomous machine tasks should ultimately produce an output, whether an actual logical or physical product, or should throw exceptions when a monitored object does something out of tolerance or “unusual.” The simpler, more repetitive, or more predictable the task, the less supervision may be needed—but in almost all cases there will ultimately (somewhere down the line) be human interaction or oversight.
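The exception-throwing oversight described above is a familiar engineering pattern: the machine runs unattended and escalates only the unusual cases back to a human. A minimal sketch, with illustrative tolerance thresholds of our own choosing:

```python
# Sketch of supervised autonomy: readings are processed autonomously,
# and an exception pulls a human back into the loop only when a value
# falls out of tolerance. Thresholds here are illustrative.

class OutOfToleranceError(Exception):
    """Raised when a monitored value needs human review."""

def autonomous_check(readings, low=10.0, high=90.0):
    """Process readings without supervision; escalate the unusual ones."""
    processed = []
    for value in readings:
        if not (low <= value <= high):
            raise OutOfToleranceError(f"reading {value} needs human review")
        processed.append(value)
    return processed

assert autonomous_check([50.0, 60.0]) == [50.0, 60.0]  # routine: no human needed
try:
    autonomous_check([50.0, 120.0])
except OutOfToleranceError:
    pass  # a human is alerted; the routine cases never required one
```

The less frequently the exception fires, the more autonomy the machine has effectively earned, which is one concrete way the trust gradient described in this section can be measured.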

We are in the midst of a grand historical collaboration between humans and machines that began with the industrial revolutions and will continue with the balance of the broadest definition of “work” shifting from humans to machines. But this evolution will involve a significant amount of augmentation (i.e., direct human/machine collaboration) as opposed to complete delegation (autonomy). And even in the case of trusted autonomy (short of the mythical singularity which is at best many generations away), there will still ultimately be human oversight.


There are plenty of examples historically of collaboration being key to forward development—typically between humans, but also between humans and other species (less collaboration than symbiosis). While you could say that the industrial and computer revolutions involved human-machine “collaboration,” the terms of the collaboration were unilaterally defined by the humans. Machines were inanimate and have done, until recently, only what they were told (or rather programmed or built) to do. There was no “black box” issue wherein the behavior of the machine (physical or logical) could not be reverse-engineered to understand its behavior.

However, with the parallel and dramatic ramp of computing resources (yet to be revolutionized once again by quantum computing) and the related generation of unprecedented volumes of data that are effectively unintelligible to us without assistance, we’ve really for the first time entered a completely new domain. Now the partners we collaborate with are not primitive (as in species that have evolved symbiotic relations) or transparent (as in machines that we clearly understand by virtue of programmatic engineering and predictable behavior). Our partners in many cases will be logical machines (often controlling physical machines), which will become increasingly autonomous, both in behavior as well as in our ability to explain or deconstruct that behavior. Much as we cannot get completely into the mind of a human partner, trust—based on original design intent but also very much on history and performance—will be integral to this next collaborative evolution.


Trust

Since machines and their respective data repositories have already become almost completely opaque to us in many cases (witness the “uniquely human,” but arguably non-reverse-engineerable, Move 37 by AlphaGo), there will come a point at which, in order to delegate more and more complex tasks to machines, we will have to also codify—programmatically and universally—what it means to “trust” our machine colleagues.

In Yuval Noah Harari’s groundbreaking book Sapiens (and further developed in his subsequent Homo Deus), he delves deep into the concept of “trust” on a macroeconomic scale, using the concepts of money and politics to make the point that human communication was key to shared perceptions of value where there was in fact no inherent value. That’s human society. Similarly, key to machine/human coevolution, a variant of “communication” will need to evolve that establishes trust between the two domains—and also establishes “ground truth”: principles and rules that can be agreed upon, but for the first time with “inanimate” partners. And these will likely be principles and rules that go well beyond Isaac Asimov’s Three Laws of Robotics. Many are already hypothesizing that evolving blockchain technologies—general distributed ledgers—may represent the future “lingua franca” of human/machine ground truth—leading to a form of trust based on cryptography underpinned by proof and stakes.

So if an element of trust was involved in our development of machines as we began to automate tasks previously performed by humans, that trust was established by simply observing those machines performing their assigned tasks. Early machines were transparent. It was easy to determine whether they worked or not, and also how they worked. 

And if an element of trust was involved in our development of the first logical machines, as we began to automate logical tasks heretofore performed by humans, that trust was established by a straightforward deconstruction of machine instructions. Programming was, literally, programmatic. Proffered solutions by machines could be proven by examining code.

But in the next era of ceding work to machines, we’re faced with needing to establish a different type of trust. This will be especially so with regard to AI, machine learning, sensors performing beyond our human abilities, and incomprehensible (to humans) amounts and types of data. The extent to which true machine autonomy evolves will be gated by our ability to establish this trust, without the ability to rely on traditional methods of observation and deconstruction.

This is the window we are in, where these things are happening. We put forth here that:

  1. machines will supersede human ability in many (or most) areas over time,

  2. humans will still be “in the loop” to the extent human/machine trust is established, and

  3. we are in a time window in which many patterns of human/machine interaction will be “set.”

This means that not only will the next few decades (an eyeblink in the historical scheme of things) be hugely disruptive, but they will also offer tremendous opportunity, the likes of which we have seen in relatively few periods in our history.

All of which raises the enormous and time-sensitive question: How will innovators (and by extension, investors) help define and steer near-term innovation? And what are the key characteristics of machine roles vs. human roles going forward—including those cases where human/machine collaboration results in positive-sum outcomes based on trust?





The Innovator’s Response: Machines Will Win, Thus So Will We

 
 

Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world.

- Albert Einstein, in George Sylvester Viereck, “What Life Means to Einstein,” Saturday Evening Post

 

There is a dystopian view of the near and distant future that we do not share. This view fears the continued “rise of the machine” as portending a future where machines control humans. Just as the Luddites of the 19th century feared the disruption and uncertainty of the first “machine” era, today’s laborers find it hard to give the idea of machines as collaborative partners the benefit of the doubt.

 

AI scares people, says Marc Andreessen, because it combines two deep-seated fears: the Luddite worry that machines will take all the jobs, and the Frankenstein scenario that AIs will “wake up” and do unintended things.

- The Economist, “Frankenstein’s paperclips”

 

By now, though, it should be clear that we feel that the further—and profound—integration of machines into our human history is inevitable. And we believe that key to that evolution will be thoughtful innovation that anticipates societal concerns while forging a radical new definition of trust in machines as partners and collaborators.

We believe in a future unbounded by many of today’s societal concerns. We’re already seeing machines and systems poised to take on climate change, planetary dependence, poverty, government, and suboptimal restaurant choices. Should we let machines tackle those problems? Hell yes. With advancements in artificial intelligence, interconnectivity, and microsensors, and with incredible development tools like distributed ledger technology, what we see today is simply a local maximum. Layer upon layer of technological progress has been set, and more will follow. Processing speeds, bandwidth, and technical ingenuity are in abundance, and we now sit at a point where applied use cases are scrambling to catch up. This will not happen without the right brilliant ideas, the right brilliant teams, and the right dedicated ecosystem.

As investors, we have a responsibility—and a unique opportunity—to work alongside the Founders who will build this future. Entrepreneurial activities like assembling intellectually curious and remarkable teams remain immune to automation. And for the ones who harness these incredible applications to resolve tomorrow’s problems, well, hold on to your hat. The economic and societal benefits from doing so should supersede those of prior generations of innovations. We are, in effect, “crowdsourcing the future” through innovators.

In Summary

Technology levels the playing field. It provides access to market data for farmers in Africa at the same time it gives the no-collar worker in America access to the world’s information on their phones. Technology can result in positive outcomes across our global society. 

Machines Will Win. How they win is up to us. We decide what they will control. We decide when to keep humans in the loop. The next era of human/technology evolution will not see machines and humans working autonomously and independently: The path forward is human-machine coevolution via collaboration. Machines will own the repetitive, the mundane, the dangerous, and the complex. Humans will own the creative, the instinctual, and the intuitive. The degree to which the collaboration is truly symbiotic will define our path forward. In the end, we decide how we will use the bounty of time, energy, and freedom that the work of machines provides for us.