The Design of Our Future

September 26, 2016

In “We Need To Spend More Time Questioning Our Technology-Driven Future,” an article at Co.Exist suggests we all pause to consider what we want our world to look like. Along with the boundless potential of today’s fast-evolving technology come consequences, many of them unforeseen. Writer Ben Schiller cites futurist Gerd Leonhard, author of the book Technology vs. Humanity. Far from a modern Luddite, Leonhard is a consultant for Google and a daily advocate for the wonders of advancing technology. His thorough understanding of the topic allows him to see potential pitfalls as well.

The shape of technology today calls for society to update the way it approaches business, says Leonhard, and to move past the “industrial-age paradigm of profit and growth at all costs, or some outmoded technological imperative that may have served us well in the 1980s.” He also points to the environmental problems created by fossil fuel companies as an example. If we aren’t careful, the AI and genetic engineering fields could develop their own “externalities,” or problems others will pay for, one way or another. Can we even imagine all the ways either of those fields could potentially cause harm?

Schiller writes of Leonhard:

The futurist outlines a philosophy he calls ‘exponential humanism’—the human equivalent of exponential technology. As a species we’re not developing the necessary skills and ethical frameworks to deal with technology that’s moving faster than we are, he says. We may be able to merge biology and technology, augment our minds and bodies, become superhuman, end disease, and even prolong life. But we’re yet to ask ourselves whether, for example, extending life is actually a good thing (as a society—there will always be individuals who for some reason want to live to 150). And, more to the point, will these incredible advances be available to everyone, or just a few people? To Leonhard, our current technological determinism—the view that technology itself is the purpose—is as dangerous as Luddism was 200-odd years ago. Without moral debate, we’re trusting in technology for its own sake, not because it actually improves our lives.

The write-up gives a few ideas on how to proactively shape our future. For example, Facebook could take responsibility for the content on its site instead of deferring to its algorithm. Leonhard also suggests that companies that replace workers with machines pay a tax that would help soften the blow to society, perhaps even funding a guaranteed minimum income. Far-fetched? Perhaps. But in a future with fewer jobs and more freely available products, a market-driven economy might just be doomed. If that is the case, what would we prefer to see emerge in its place?

Cynthia Murrell, September 26, 2016
Sponsored by, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
