Abstract
This essay offers some predictions about the role of AI in our future. Notably, AI scales far better than human organizations at aggregating and communicating information. This should allow us to solve large problems in, e.g., biology and human health, but it also puts downward pressure on the value of human intellectual labor, much as automation puts pressure on physical labor. This places more emphasis on a third way that individuals interact with the economy & society: through purchasing decisions and voting. The aggregation of personal information through social networks (and the application of AI within them) is a direct attack on this autonomy. The net effect of AI on human health and wellness will likely be positive, provided it is regulated by distributed social feedback loops.
1
> Also can we talk about the broader impact section because the last sentence is quite something.
“On the downside, a deeper understanding of the brain is likely to translate into accelerated development of artificial intelligence, which would put a great deal of power into the hands of a small number of people.” Roman Pogodin & Peter Latham, https://arxiv.org/abs/2006.07123
I don’t think this common sentiment is wrong. Many (most?) technologies have served to concentrate power into the hands of the few; AI will be the same. Realistically, I suspect that AI will need some government regulation to redistribute the spoils of capital. In the past, as machines replaced physical labor, people simply went and did other jobs. This had pain points, but so far it seems to have worked out for the better (look at us scientists, for example). AI will hopefully not be infinitely worse.
As knowledge is codified, however, I suspect that many people will have less incentive to learn things. The lifecycle of us as information-organisms is not great. (Then again, we’re animals and have many purposes above and beyond processing information; it’s not really why we’re here, despite the number of people employed to do it...) It takes us 20 years to get up to speed, and maybe another ten years to make an impact on the world / push something forward. Sure, there are billions of us, with expansive collective and distributed knowledge, but this is a weakness, too, as our interpersonal bandwidth is low. Machines have none of these problems: per task there need only be one information-organism, and it can learn from all experience. Self-driving is limited now, but once it’s figured out we need only one self-driver, and it will learn from the 1 billion miles/day that people drive. Machines scale in bandwidth, storage, replicability, and computational speed (though presently machines are still orders of magnitude less efficient than our brains), which allows them to grow without limit via parallel experience.
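To make the “one information-organism learning from all experience” point concrete, here is a minimal sketch of data-parallel learning: many independent drivers (data shards) contribute gradients to one shared model, so every mile driven improves the single self-driver. The linear model, shard sizes, and learning rate are all invented for illustration.

```python
# Minimal sketch: many drivers' experience, one shared learner.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])           # the "world" the shared model must learn

def shard_gradient(w, n=100):
    """Gradient of mean squared error on one driver's local experience."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return 2 * X.T @ (X @ w - y) / n

w = np.zeros(2)                          # the single shared "self-driver"
for step in range(200):
    grads = [shard_gradient(w) for _ in range(10)]   # 10 drivers in parallel
    w -= 0.05 * np.mean(grads, axis=0)               # everyone's miles count once

print(w)  # converges toward true_w using all shards' experience
```

The expensive part is gathering experience; a single model amortizes it across all contributors, which is exactly the scaling humans lack.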
I just don’t see how humans can scale, in the same way humans can’t compete with things made of metal. Likely there will be job categories that endure, possibly those dominated by interpersonal interactions. Hopefully they will take longer to automate! (Though the adoption of AI chatbots by single Chinese men doesn’t bode well...) This is consistent with the majority of the US economy being service-based nowadays: automation has displaced workers into service jobs. (This is not necessarily the primary reason; the growth of Asian manufacturing and the role of the US dollar as a reserve currency are arguably just as important.) I think that creative types will keep their spot at the table, in that relating human experience to human emotions, deep gestalts, cultural memories, and the difficult-to-explain intricacies of living is something best done by fellow people. (But I’m not sure about that: Facebook certainly has enough data to attempt automation.)
There are innumerable upsides, of course. An energy- and data-efficient AI can harbor models of the world, and of its interactions, that are much larger than ourselves; it can support feedback loops for creation and optimization that are much longer and higher-SNR (think backprop) than those found in human organizations or in nature. Such an AI can solve problems that can’t easily be solved by groups of humans. This is the same as all automation; for example, no human can ceaselessly weave yards of cloth per minute; no group of humans can reliably control the air traffic over the US. Machines can and do. (As is evident from all the jets flying overhead ... I live close to an airport!) Software provides enormous levers for the human will to exert force on the world, as it allows production of complex things like cars, ICs, supply chains, logistics routes, and all forms of data search and distribution. This is absolutely fantastic! These systems are the result of collective man-millennia of effort: humans put the structure into computers. Recently, software structure can instead be automatically distilled from curated data into neural-net coefficients; this is compute-heavy but, for the scale, labor-light. More interesting is when the data comes from structured search & experimentation on the part of an automated system; in the right contexts, such as Go, Chess, and other computer games, this allows for systems limited by neither human labor nor human ideas.
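As a toy of “structure distilled from data into coefficients”: the two-layer network below absorbs a function via backprop, with no hand-entered rules. The target function, network size, and learning rate are arbitrary choices for the sketch.

```python
# Toy: distill sin(x) from data into neural-net coefficients via backprop.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)                              # the "structure" hiding in the data

W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)               # forward pass
    P = H @ W2 + b2
    dP = 2 * (P - Y) / len(X)              # backprop: d(MSE)/dP
    dW2 = H.T @ dP; db2 = dP.sum(0)
    dH = (dP @ W2.T) * (1 - H**2)          # tanh' = 1 - tanh^2
    dW1 = X.T @ dH; db1 = dH.sum(0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g

# small residual: sin() now lives in the weights, entered by no human
print(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y)**2))
```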
A sufficiently developed AI is also not limited by the system that created us: evolution. Evolution is, of course, a brilliant creator; it searches, but also structures search to make solutions accessible, and fosters miracles of developmental complexity. Yet, like humans, it (1) does not communicate information between organisms perfectly (this is obviously not true in bacteria, and lateral gene transfer seems increasingly prevalent in multicellular organisms) and thus makes ‘choices’ imperfectly; (2) only builds a model of the problem domain indirectly; and (3) is prone to getting stuck in local minima. AI can communicate with itself perfectly, build Bayesian-optimal models of the world, and have meta-feedback loops for escaping local minima. Therefore I’ll wager that AI can scale beyond evolution, just the same as it scales beyond us!
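Point (3) and the meta-feedback loop are easy to caricature in code: plain gradient descent settles into whichever basin it starts in, while an outer loop of random restarts (the crudest possible meta-feedback) samples basins and keeps the best. The objective below is an arbitrary multimodal function chosen only for illustration.

```python
# Toy: one descent gets stuck; an outer restart loop escapes local minima.
import numpy as np

f  = lambda x: np.sin(5 * x) + 0.5 * x**2        # arbitrary multimodal objective
df = lambda x: 5 * np.cos(5 * x) + x

def descend(x, steps=500, lr=0.01):
    for _ in range(steps):
        x -= lr * df(x)
    return x

x_single = descend(2.0)                          # one init: a nearby local minimum
rng = np.random.default_rng(2)
restarts = [descend(x0) for x0 in rng.uniform(-3, 3, size=20)]
x_best = min(restarts, key=f)                    # outer loop keeps the best basin

print(f(x_single), f(x_best))                    # the restart loop finds the lower value
```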
This is a statement heavy with hubris. Biology of course scales better than anything else at simulating biology, because it uses available thermal entropy + chemical affinity to create control systems. You can’t get any smaller than molecules, and self-organization using the exponential energy functions of chemistry is both thermodynamically efficient and naturally distributed. Compare this with computer chips, where organization (what affects what) is imposed, via the enormous entropic barriers of lithographic fabrication, upon nine-nines crystalline silicon using coherent, ultra-high-energy photons. Biology solves problems of organization in a very different fashion than humans do; it is likely that AI-scale systems will require both the top-down, high-entropic-barrier approach of modern technology and the bottom-up, entropy-exploiting rules of self-organization. Indeed, I’d bet that the phenomenon of deep double descent is an instance of this: backprop is the high-SNR, high-fidelity feedback loop, which finds and exploits structure imbued in a network by the entropy of initialization.
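The double-descent curve itself is easy to reproduce in a toy: with minimum-norm least squares on fixed random ReLU features, test error typically spikes near the interpolation threshold (features ≈ training samples) and descends again as the model grows. This sketch makes no claim about the initialization-entropy interpretation above; it only shows the curve. All sizes and the target function are arbitrary.

```python
# Toy double descent: min-norm least squares on fixed random ReLU features.
import numpy as np

rng = np.random.default_rng(3)
n_train, n_test = 40, 400
x_tr = rng.uniform(-1, 1, n_train)
y_tr = np.sin(2 * np.pi * x_tr) + 0.2 * rng.normal(size=n_train)
x_te = rng.uniform(-1, 1, n_test)
y_te = np.sin(2 * np.pi * x_te)

def features(x, W, b):
    return np.maximum(0.0, np.outer(x, W) + b)   # fixed random ReLU features

for k in (5, 20, 40, 80, 400):                   # model size sweeps the threshold
    W, b = rng.normal(size=k), rng.normal(size=k)
    beta = np.linalg.pinv(features(x_tr, W, b)) @ y_tr   # min-norm solution
    err = np.mean((features(x_te, W, b) @ beta - y_te)**2)
    print(k, round(err, 3))                      # expect a bump near k == n_train
```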
Rather than simulating biology, AI could aggregate and understand information from many people to solve health problems. These, roughly, are problems in our programming that haven’t been selected out + problems in our environment; teasing apart causation from billions of lifetimes of genotype-phenotype-environment interactions will require models that are much greater than us. (The lack of such models is perhaps one of the reasons why scientists have made slow progress against scourges like neurodegenerative disease.) I cannot overstate how wonderful this would be for human health and wellness.
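A cartoon of why those interactions are hard: simulate a phenotype with additive genetic, environmental, and gene-by-environment (GxE) terms, then note how much variance a naive additive model misses. Every effect size here is invented; real data is vastly noisier and confounded.

```python
# Toy: GxE interaction variance is invisible to a purely additive model.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
g = rng.integers(0, 3, size=n).astype(float)    # genotype: 0/1/2 allele count
e = rng.normal(size=n)                          # environment
# phenotype = additive genetics + environment + GxE interaction + noise
y = 0.5 * g + 0.8 * e + 0.6 * (g - 1) * e + rng.normal(size=n)

X = np.column_stack([np.ones(n), g, e])         # naive additive-only model
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2_additive = 1 - np.var(y - X @ beta) / np.var(y)

X_full = np.column_stack([X, (g - 1) * e])      # model that knows about GxE
beta_f, *_ = np.linalg.lstsq(X_full, y, rcond=None)
r2_full = 1 - np.var(y - X_full @ beta_f) / np.var(y)

print(r2_additive, r2_full)                     # the gap is the variance GxE hides
```

With one known interaction the fix is trivial; with billions of genotype-environment pairs and unknown interaction structure, it requires models far larger than any one of us.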
2
Utopian visions aside, we healthy people will still have a problem with scaling, and hence with employment. Mr. Elon Musk is/was obviously super concerned by all of this, which is at least one of the reasons he helped start OpenAI (gee, that didn’t backfire at all?) as well as Neuralink. It wasn’t clear to me back then, and still isn’t, why merging computers/machines with humans would solve any of these core scaling inequalities. How would giving us more bandwidth to machines provide the differentiation essential for economic value? It could just make more information available to machines and destroy the gradient between humans and automation. (If the gradient between humans is flattened, we can all be richer; but if the gradient to machines is flattened, then they, or their owners, become richer.) Just as mitochondria (and bacteria) create ATP from electrochemical gradients, knowledge workers in particular derive money from the gradient of information between their heads and the outside world. When that gradient is disrupted and the resulting knowledge allows many more people to make better decisions, the commonwealth is enriched; but what happens when those decisions are being made by machines?
Circa fall 2016 (when Neuralink was spinning up, we were tutoring Musk in neuroscience, very fun, and there was much discussion of ethics), the best solution I could come up with was in the form of an analogy: there needs to be a data ecology, where data/information are the raw materials for collective life, and are guarded as a scarce resource. In a forest there is limited nitrogen, phosphorus, potassium, etc., yet competition and bartering (e.g. in the roots) support a vibrant diversity of life and even species-agnostic selflessness. If the value of both our physical and our cognitive labor is eroded by automation and AI, all is not lost; society may become richer & healthier via problem-solving and redistribution, provided we maintain a degree of autonomy. We still have our ability to decide what to purchase, how to spend our time, how to direct our voices, and whom to vote for. Even if the bulk of production is not distributed, society should be stable if it remains democratic.
Complicating all this is hyper-targeted, AI-aware advertising. I’m not sure where this is going, but a primary mode seems to be destabilization: centralized communication hubs allow a few actors to control the information, and hence the decision-making, of many people. They do this through the obvious ‘fake news’ motif and the ready polarization facilitated by personal targeting. There are two apparent solutions: collectivize the media platforms, so that they serve the electorate and their users rather than advertisers or a rich elite; or, in the data-ecology analogy, demand payment for the use of your information. Why should Facebook get the revenue? They are using your data and your peers’ data to control/modulate your decision-making. (Collectivization has a lot of obvious problems: what about all the cool things that Facebook, Google, et al. do? Without the profit motive, would they innovate so quickly? Are politicians any better than technocrats? As time progresses, will Facebook, Google, et al. become more sensitive to the desires of their ’constituencies’, or will they instead become more adept at controlling them? Likewise, charging rent on data has its own problems: it’s not obvious how to do this technically, and it could drive market dynamics back toward the advertisers, exacerbating the rich-get-richer effect. The analogy is further complicated by the fact that data is fungible, while nutrients are not.) To think about it another way: if your cognitive ability is worth less, and your muscles are worth less, then your decision-making ability, and the memories/experiences upon which it is built, are worth more.
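On the “not obvious how to do this technically” point: one idea from the data-valuation literature is leave-one-out valuation, pricing each person’s data by how much model quality drops when it is removed. The sketch below is purely a toy on fake data, not a claim about how any real platform does or should pay for data.

```python
# Toy leave-one-out data valuation: value a user's data by its marginal
# effect on held-out model error. All data and sizes are fabricated.
import numpy as np

rng = np.random.default_rng(5)
w_true = np.array([1.0, -2.0, 0.5])

def make_shard(n):                       # one user's data contribution
    X = rng.normal(size=(n, 3))
    return X, X @ w_true + 0.5 * rng.normal(size=n)

shards = [make_shard(rng.integers(10, 60)) for _ in range(8)]
X_val, y_val = make_shard(500)           # held-out data for scoring models

def fit_error(shard_list):
    X = np.vstack([s[0] for s in shard_list])
    y = np.concatenate([s[1] for s in shard_list])
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.mean((X_val @ w - y_val)**2)

base = fit_error(shards)
for i in range(len(shards)):
    loo = fit_error(shards[:i] + shards[i + 1:])
    print(f"user {i}: value ~ {loo - base:.4f}")   # bigger = more useful data
```

Even this toy shows the problem footnoted above: data is fungible, so near-duplicate contributions drive each other’s marginal value toward zero.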
All this is very much a first-world problem; even here, the working economy is not going to be replaced by AI overnight, nor will it ever be completely taken over. Essays like this (and people like me?) like to discuss abstractions & asymptotic behavior, but the real world is much messier and more interesting. Global power struggles! Pandemics! Climate change! Demographic shifts, autocrats and demagogues, etc.! Saying AI is an existential threat is partly a form of egoism for those who work on AI: look at me, I’m important too.
___________________________________________
3
Back at UCSF, Joseph M, Joseph O, and I would have long philosophical conversations like this while eating lunch and drinking coffee in the glorious (and sometimes incessant) sunshine of SF. As we have since dispersed, I’ll share my thoughts here.
The discussion above was spurred by conversations with Arthur Z, Laura G, & others; it is motivated by my own interests (working on neuro-inspired AI) and by concern over the effects such a technology might have. Based on the arguments above, I think that working on AI is well justified by the upsides (beyond the logic that “it will happen anyway”); the social problems will need to be solved in a social way, especially the issues of social-media polarization. With this cleared away, the next post will detail some ideas on borrowing from neuroscience to create “an energy- and data-efficient AI”, as philosophized above.
Tim Hanson
timlarshanson@gmail.com
July 28, 2021