Digital Delphis: Predictions More Reliable Than Checking Pigeon Innards, We Think

July 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

One of the many talents of today’s AI is apparently a bit of prophecy. Interconnected examines “Computers that Live Two Seconds in the Future.” Blogger Matt Webb pulls together three examples that, to him, represent an evolution in computing.


The AI computer, a digital Delphic oracle, gets some love from its acolytes. One engineer says, “Any idea why the system is hallucinating?” The other engineer replies, “No clue.” MidJourney shows some artistic love to hard-working, trustworthy computer experts.

His first digital soothsayer is Apple’s Vision Pro headset. This device, billed as a “spatial computing platform,” takes melding the real and virtual worlds to the next level. To make interactions as realistic as possible, the headset predicts what a user will do next by reading eye movements and pupil dilation. The Vision Pro even flashes visuals and sounds so fast as to be subliminal and interprets the eyes’ responses. Ingenious, if a tad unsettling.
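Apple has not published how the Vision Pro's prediction pipeline works, but the basic idea of guessing a user's next action from where the eyes linger can be sketched in a few lines. Everything below — the target names, the dwell-count heuristic, the radius — is illustrative, not Apple's actual method:

```python
from collections import Counter

def predict_next_target(gaze_samples, targets, radius=50.0):
    """Guess which on-screen target the user will act on next by counting
    which target the recent gaze samples cluster around. A toy heuristic,
    not the Vision Pro's real pipeline."""
    hits = Counter()
    for gx, gy in gaze_samples:
        for name, (tx, ty) in targets.items():
            # Count a sample as a "look" if it falls within the target's radius.
            if (gx - tx) ** 2 + (gy - ty) ** 2 <= radius ** 2:
                hits[name] += 1
    return hits.most_common(1)[0][0] if hits else None

# Hypothetical UI targets and a short burst of gaze samples (screen coordinates).
targets = {"ok_button": (100.0, 100.0), "cancel_button": (300.0, 100.0)}
samples = [(98.0, 103.0), (101.0, 99.0), (295.0, 102.0), (99.5, 100.5)]
guess = predict_next_target(samples, targets)
```

A real system would weight recency, saccade direction, and (per the reporting) pupil response to those subliminal flashes, but the shape of the problem — turn a gaze stream into a prediction before the hand moves — is the same.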

The next example addresses a very practical problem: WavePredictor from Next Ocean helps with loading and unloading ships by monitoring wave movements and extrapolating the next few minutes. Very helpful for those wishing to avoid cargo sloshing into the sea.

Finally, Webb cites a development that both excites and frightens programmers: GitHub Copilot. Though some coders worry this and similar systems will put them out of a job, others see it more as a way to augment their own brilliance. Webb paints the experience as a thrilling bit of time travel:

“It feels like flying. I skip forwards across real-time when writing with Copilot. Type two lines manually, receive and get the suggestion in spectral text, tab to accept, start typing again… OR: it feels like reaching into the future and choosing what to bring back. It’s perhaps more like the latter description. Because, when you use Copilot, you never simply accept the code it gives you. You write a line or two, then like the Ghost of Christmas Future, Copilot shows you what might happen next – then you respond to that, changing your present action, or grabbing it and editing it. So maybe a better way of conceptualizing the Copilot interface is that I’m simulating possible futures with my prompt then choosing what to actualize. (Which makes me realize that I’d like an interface to show me many possible futures simultaneously – writing code would feel like flying down branching time tunnels.)”

Gnarly dude! But what does all this mean for the future of computing? Even Webb is not certain. Considering operating systems that can track a user’s focus, geographic location, communication networks, and augmented reality environments, he writes:

“The future computing OS contains of the model of the future and so all apps will be able to anticipate possible futures and pick over them, faster than real-time, and so… …? What happens when this functionality is baked into the operating system for all apps to take as a fundamental building block? I don’t even know. I can’t quite figure it out.”

Us either. Stay tuned, dear readers. Oh, and let’s assume the wizards get the digital Delphic oracle outputting the correct future. You know, the future that cares about humanoids.

Cynthia Murrell, July 28, 2023
