The more advanced the technology, the more challenging to wield judiciously.
More simply:
The more capable the tool, the harder to master.
Even more simply:
More power, more problems.
Is this counter-intuitive, even paradoxical?
The key words above are wield and master.
If you only want to use a new piece of technology, then increased power often means increased convenience.
It’s easier to ride in an aeroplane than to ride a bicycle — and much easier to take a plane ride around the world than to ride a bicycle the same distance.
Similarly, it’s easier to search the internet than to search through a reference library — and much easier to search for obscure information online than to find the same information in the library.
However, it’s significantly more difficult to pilot an aeroplane than to ride a bicycle.
And it’s vastly more difficult to build a search engine than to use a search engine.
The more powerful the technology, the greater the divide between those who use it and those who wield it.
The key question when considering any new technology is this:
Do you want to be a passenger, or do you want to be a pilot? To use the technology, or master it?
In more technical terms: the more powerful the technology, the more options it opens up — greater tools imply greater optionality.
A bicycle can take you anywhere in the neighbourhood; a jet engine can take you anywhere in the world; a Star Trek-style warp drive could take you anywhere in the galaxy.
Likewise, a book can teach you all about one specific topic; the internet can teach you all about many topics; a “friendly” artificial general intelligence could teach you all about topics far beyond existing human knowledge.
(That is, a friendly AGI could constantly create new knowledge, David Deutsch-style.)
More options, of course, mean more dangers.
You can’t drop bombs from a bicycle or fly a bicycle into a skyscraper. You can spread propaganda with physical books, but you can spread propaganda instantaneously around the world with the internet.
Hostile beings with a warp drive could be extremely dangerous; an artificial general intelligence programmed with “unfriendly” values (values opposed to human life and civilisation) could be even worse.
A more mundane risk with optionality is distraction — more options also mean more mediocre options.
As a general rule, mediocre options are either pointless or marginally valuable: they offer minor benefits accompanied by numerous drawbacks.
If you’ve ever found yourself wasting hours scrolling vaguely entertaining social media content, you know exactly what I’m talking about.
The distraction of optionality is not only a matter of meme videos and online drama. To paraphrase Paul Graham, more fortunes are lost through bad investments than excessive expenditure, and more time is lost through fake work than self-indulgence.
The real costs of technological optionality lie in the vast ocean of mediocre and half-assed tools in existence, all claiming to offer some unique advantage or special ability.
There is no limit to the amount of time modern knowledge workers can spend researching, configuring, and customising their tools. Programmers can tinker endlessly with complex tech stacks; sales and marketing people can lose themselves setting up any of over 14,000 SaaS solutions.
The problem is not that these tools are all useless; it’s that only some of them are useful, and it’s often impossible to tell in advance which ones.
Of course, the value of any given tool is directly dependent on one’s particular situation and goals. A tool that is useful for a five-hundred-person business might not be useful for a freelancer or solo entrepreneur, and vice versa.
Even good tools can be a liability — a fantastic tool that does not help you meet your goals is simply yet another time-sink.
We are drowning in an ocean of tools — good tools, bad tools, ugly tools, in-between tools — and the true challenge lies in identifying the minimal set of tools that achieves your goals.
Which brings things back to artificial intelligence. Generative AI systems open up whole new universes of optionality — and we are only just beginning to scratch the surface of what they can do.
New universes of optionality mean more potential dangers, and vastly more scope for time-wasting. Maybe you could combine a dozen or more ChatGPT plugins into some Frankenstein meta-system that does something unique. Would that be a worthwhile use of your time?
Or maybe you could spend a week, or six months, investigating which of the hundreds of open-source language models performs best on your particular problem. Would that be a good thing to do? How do you know?
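To make that rabbit hole concrete, here is a minimal sketch of what even the first step of such an investigation might look like, assuming the Hugging Face transformers library. The model names and prompt are illustrative placeholders, not recommendations, and a serious comparison would also need task-specific test data and metrics.

```python
# A minimal sketch of the model-comparison rabbit hole: run the same
# prompt through two small open models and eyeball the outputs.
# Model names and prompt are placeholders; a real evaluation would
# need proper benchmarks, not a single hand-picked prompt.
from transformers import pipeline

PROMPT = "Explain the trade-off between tool power and tool mastery:"

for model_name in ["distilgpt2", "gpt2"]:  # two small open models
    generator = pipeline("text-generation", model=model_name)
    # Greedy decoding (do_sample=False) keeps the comparison deterministic.
    result = generator(PROMPT, max_new_tokens=60, do_sample=False)
    print(f"--- {model_name} ---")
    print(result[0]["generated_text"])
```

Multiply this loop by hundreds of models, dozens of generation parameters, and endless prompt variations, and the week-to-six-months estimate starts to look conservative.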
I do not have a complete answer to this problem, but I believe that part of the solution lies in setting clear goals.
Of course, there is nothing unique to artificial intelligence in this regard. If you were applying pointy-stick technology to launch a meat-hunting startup during the paleolithic, you would need to know whether you were hunting for antelopes or mammoths.
If you were leveraging the galleon platform to create trans-oceanic trade solutions during the Age of Discovery, you would need to know whether you were heading to Arabia for spices or to the Caribbean for gold.
Generative artificial intelligence may be one of the greatest optionality-expanders in the history of technology, and so demands even more rigorous and disciplined goal-setting from those who would wield it skilfully.
Conversely, generative artificial intelligence may also be a unique solution to the optionality problem: the meta-tool that enables would-be technology masters to corral, wrangle, and marshal all of the other software tools currently in existence.
Which brings us to…