The remarkably prophetic capacity of humans to imagine and harness the future has shaped the evolution of humankind. Straight-line extrapolations and nonlinear predictions based on present-day facts have helped civilization realize mesmerizing technologies first described in science fiction novels and cinematic features from bygone eras. Therein we encounter thought-provoking ideas that foreshadow the innovative products we take for granted today.
Historically, creative writers and movie directors have had an innate talent for envisaging the future rapidly accelerating toward us. Even comic strip writers got there early: Dick Tracy's 2-way wrist radio, introduced in 1946, mirrors our indispensable Apple smartwatches. When The Jetsons premiered in 1962, it foresaw flat-screen televisions, video calls, drones, holograms, flying taxis, and digital newspapers. The moon landing in 1969 was loosely divined in Jules Verne's From the Earth to the Moon, published in 1865. With uncanny accuracy, Verne described astronauts in an aluminum capsule being launched into space from Florida. In 1984, William Gibson's novel Neuromancer conjured the World Wide Web, hacking, and virtual reality. Steven Soderbergh's 2011 film Contagion depicts a fast-spreading worldwide virus eerily like the one that led the World Health Organization to declare Covid-19 a pandemic nine years later.
Consequently, business and industry strategists and public policymakers have increasingly looked to science fiction to see what lies ahead of the curve in a reimagined world. For example, the Financial Times recently described how defense establishments worldwide use visionaries to prognosticate the future of warfare based on fictional intelligence. Some of their predictions are captured in Stories From Tomorrow, published by the United Kingdom's Ministry of Defence. Applying "useful fiction" (admittedly with non-fiction embedded within the authors' storytelling), this compendium of eight narratives sparks interesting insights about tomorrow's revolutionary technologies. Unlike WarGames, however, the folly of war itself is not among them.
Instead, the authors explore how still-theoretical quantum computing could render sophisticated cyber defense systems, digital electronic communications, and supercomputers utterly defenseless against a future enemy attack. Current countermeasures such as artificial intelligence (AI), algorithms, and encryption methods are as yet no match for a quantum apocalypse — unless the wizards at the Defense Advanced Research Projects Agency (DARPA) have something up their sleeve. The increasing automation of the world's militaries includes carrier air fleets driven by AI-enabled aerial vehicles within the next decade. The U.S. Navy recently took delivery of an unmanned, fully autonomous warship that can remain at sea for up to 30 days.
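The "quantum apocalypse" threat can be made concrete with a toy sketch. Public-key schemes such as RSA rest on the difficulty of factoring large numbers; the primes below are absurdly small so brute force succeeds instantly, whereas real keys use primes hundreds of digits long, which only a large quantum computer running Shor's algorithm could factor quickly. This is an illustrative sketch, not production cryptography.

```python
# Toy RSA with tiny primes, to show what factoring buys an attacker.
p, q = 61, 53          # secret primes (real ones are ~300+ digits)
n = p * q              # public modulus: 3233
e = 17                 # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)    # private exponent: derivable only by factoring n

message = 42
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message  # decryption round-trips

# An attacker who can factor n (trivial here, hard classically at scale,
# fast on a quantum computer via Shor's algorithm) recovers the key:
factor = next(i for i in range(2, n) if n % i == 0)
assert {factor, n // factor} == {p, q}
```

The entire security argument lives in that last step: classical computers cannot factor realistic moduli in any feasible time, but Shor's algorithm would, which is why post-quantum countermeasures are an active research area.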
Based on its raw computational power, quantum computing may deliver solutions to the world's most pressing problems, including income disparity, poverty, climate change, and disease prevention. Stories From Tomorrow also focuses on data models powered by AI and machine learning tools that can produce digital twins capable of simulating real-time counterparts of physical objects, processes, entities, and even humans. These forward-looking models can "simulate everything from business ecosystems, military campaigns, and even entire countries, allowing deeper understanding as well as potential manipulation." In Virtual You: How Building Your Digital Twin Will Revolutionize Medicine, authors Peter Coveney and Roger Highfield explain how scientists have brought glimmers of hope to building digital copies of people (commonly referred to as doppelgangers) to trigger early detection of disease. Akin to a parallel universe, such a twin allows doctors to prescribe custom-made medical protocols based on a patient's chemical, electrical, and mechanical systems.
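The digital-twin idea above can be sketched in a few lines: a virtual model that mirrors a physical asset's sensor readings and can run "what-if" projections without touching the real object. The names here (`DigitalTwin`, `pump-07`) are purely illustrative assumptions, not a real framework.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """A minimal sketch of a digital twin: virtual state mirroring a physical asset."""
    asset_id: str
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def sync(self, sensor_reading: dict) -> None:
        # Mirror the physical asset: fold the latest reading into the twin's state.
        self.state.update(sensor_reading)
        self.history.append(dict(self.state))

    def project_temperature(self, minutes: int, degrees_per_minute: float) -> float:
        # What-if simulation: extrapolate forward without touching the real asset.
        return self.state.get("temp_c", 0.0) + minutes * degrees_per_minute

twin = DigitalTwin(asset_id="pump-07")
twin.sync({"temp_c": 65.0, "rpm": 1800})
print(twin.project_temperature(minutes=10, degrees_per_minute=0.5))  # 70.0
```

Real twins of jet engines, factories, or human organs follow the same pattern at vastly greater fidelity: continuous sensor synchronization plus a simulation model rich enough to ask questions of the virtual copy instead of the physical one.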
Medicine has already sequenced DNA, mapped the human genome, edited genes, and created stem cells in ways that allow personalized medicine to mitigate symptoms and eliminate potential ailments in their preliminary stages by drilling down to the cellular level (think nanomedicine). Imagine fabricating molecular proteins based on a patient's genetic code and identifying biomarkers to accelerate targeted drug delivery systems against Alzheimer's, heart disease, stroke, and cancer. The moral quandary surrounding the manipulation of embryonic cells in the quest for genetic perfection is a bit dystopian and reminiscent of historical attempts at racial superiority. The dangers of genome editing were the basis for the science fiction thriller Gattaca (1997), in which human sperm and embryos are manipulated in a laboratory using technology resembling modern-day CRISPR. Even more disconcerting is the ability of nation-states to create deadly pathogens and other biological agents aimed at a specific ethnic group or race of people.
The blinding pace of scientific discovery significantly compresses time, making it difficult for humans to absorb and make sense of it all given our limits of cognition. "Biological evolution happens over thousands of years, but [the] digital evolution has occurred in less than one generation," according to Professor Roy Altman of NYU. Compare this with the First Industrial Revolution of 1765-1869 (characterized by mechanization and steam power) and the Second Industrial Revolution of 1870-1968 (e.g., new energy sources, internal combustion engines, airplanes, the telegraph, and the telephone). Both had roughly one-hundred-year runways that let change diffuse gradually and gave people time to adapt and accept it.
Conversely, the Third Industrial Revolution was condensed to forty-one years (1969-2010) with the advent of computers, the Internet, 4G, automation and robotics, space expeditions, and biotechnology. These force multipliers set the stage for the Fourth Industrial Revolution, or Industry 4.0, as coined by Germany in 2010. Here we have seen, in a matter of just thirteen years, a dazzling array of co-dependent technologies working together to digitalize the world economy with a software-first approach to manufacturing. This represents a major transformation from traditional manufacturing because today a product's hardware and software (e.g., embedded sensors and the code that drives them) are inextricably intertwined and indistinguishable from one another.
Many of our solutions to historical challenges beget new challenges requiring more creative solutions. Despite raising living standards worldwide, industrial revolutions have had disastrous effects on air and water quality that now require environmental remediation. Weapon systems designed to protect us can also destroy us, and could even lead to our extinction. Computers and the internet have made us vulnerable to cyber-attacks on critical infrastructure, intellectual property, and financial systems. The point here is that novel ideas rarely arrive fully formed and ready for everyday life without unseen implications down the road. Yet this is the essence of progress.
Now generative AI is on everyone's mind. To soften anxieties rightly shared by many regarding its repercussions, it is often described as applied or computational statistics: predictions extracted from large sets of variables and weighted data. Chatbots like OpenAI's ChatGPT (in collaboration with Microsoft) and Google's Bard use large language models (LLMs) adept at constructing comprehensive essays in grammatically correct text and responding to user questions. Generative AI also helps doctors make more accurate diagnoses and programmers generate computer code. Keep in mind that the underlying neural networks are not always accurate. Worse, chatbots can fuel disinformation campaigns (e.g., deepfakes) and sway elections, the very hallmark of democratic institutions. The most effective propaganda is the subterranean variety that goes undetected or is indiscernible even to critical thinkers.
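The "computational statistics" framing above can be illustrated with a deliberately tiny sketch, nothing like a real LLM in scale: a bigram model that counts which word follows which in a corpus, then samples the next word in proportion to those weighted counts. Modern LLMs replace the count table with billions of learned weights, but the core idea of sampling the statistically likely continuation is the same.

```python
import random
from collections import defaultdict

# Tiny "training corpus" (illustrative assumption, not real training data).
corpus = "the future is not what it used to be the future is already here".split()

# Learn weighted word-to-word statistics: how often each word follows another.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word with probability proportional to its observed count."""
    followers = counts[word]
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# In this corpus "the" is always followed by "future", so the sample is certain.
print(next_word("the"))  # future
```

The hallucination problem follows directly from this design: the model emits whatever continuation the statistics favor, with no built-in notion of whether that continuation is true.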
You do not have to be a registered technophobe to observe how the tectonic plates of fictional worlds and the one we inhabit are rubbing up against one another. Look no further than how AI aided by machine learning can endow inanimate objects with seemingly sentient qualities, mimicking human emotions and free will. This blurs the line between humans and machines. Futurist Ray Kurzweil, a prophet of both techno-doom and salvation, points to the concept of the singularity, the moment when science and technology outstrip human intelligence. This phenomenon could place human reason and decision making at the mercy of metadata and computer chips.
According to the Pessimist's Archive, a compilation of "fears of old things when they were new," many of our present-day concerns echo past worries such as those voiced about Mary Shelley's Frankenstein, published in 1818. Although naysayers were often wrong about the downsides of novel technologies and scientific discoveries, that does not mean today's skeptics are wrong about the future. In the final analysis, however, science fiction is about changing today's world and reimagining our future, reminding us that what was once impossible is indeed achievable. Using fiction and scientific reasoning to invent tomorrow is where the seeds of innovation are planted.
Sign up today for a free Essential Membership to Automation Alley to keep your finger on the pulse of digital transformation in Michigan and beyond.