The California Supreme Court’s decision finding a constitutional right to same-sex marriage also predated the federal decision, and reflected how, as William Gibson put it, the future is often already here; it’s just unevenly distributed.
Think of all the people who will be affected by, and play a part in, what AI Now is trying to understand: a girl in a Ugandan village struggling to learn how to read. A grade-school kid in an advanced industrialized country fretting about his future employment prospects.
A policymaker deciding what to do about big concentrations of market power over what we buy, how we find information, and how we share ideas and news.
Consider a world of relatively sophisticated AI. Human cohesion will depend in no small part on how well society fares when those who worship emerging AI share the planet with those who feel that some AI applications making claims on us deserve recognition, those who see this as essentially an animal-welfare issue, those who think any concern for the “welfare” of an inanimate object is insane, and those who couldn’t care less.
“You must understand much of how a smart machine was designed so you can know how to reconcile its demands with your will or values, or at least how to deliberate with it. Otherwise you’ll be bending to its will and the will of the unseen people and machines who designed it, in all likelihood, to control you. You must understand that the machine is rarely as secure as you are told it is, so you should question how much you can trust me. Whether I was designed to have a fiduciary obligation to you will be hard to tell. And whatever a smart machine was designed to do, it’s far from obvious that it will work exactly as designed in any event.”
“Above all, you must observe how smart machines are changing you, so you can change as you’d like instead of becoming what you fear. If you let me, I’ll help you become what you want, and if you wander, I’ll be there to show you the way. So how do you want to change?”
Do we have a few decades? Is it wise to pretend the conversation will be easy or the benefits will be widely distributed? That somehow the players with the most concentrated power can be trusted to behave responsibly? That it can be put off because of how much technical work remains to build on the rudimentary AI that surrounds us, or because we can afford to be optimistic about the machines we’re creating?