We Need Positive Visions for AI Grounded in Wellbeing
Introduction

Imagine yourself a decade ago, jumping directly into the present shock of conversing naturally with an encyclopedic AI that crafts images, writes code, and debates philosophy. Won't this technology almost certainly transform society — and hasn't AI's impact on us so far been a mixed bag? Thus it's no surprise that so many conversations these days circle around an era-defining question: How do we ensure AI benefits humanity?

These conversations often devolve into strident optimism or pessimism about AI, and our earnest aim is to walk a pragmatic middle path, though no doubt we will not perfectly succeed. While it's fashionable to handwave towards "beneficial AI," and many of us want to contribute towards its development, it's not easy to pin down what beneficial AI concretely means in practice. This essay represents our attempt to demystify beneficial AI by grounding it in the wellbeing of individuals and the health of society. In doing so, we hope to highlight opportunities for AI research and products to benefit our flourishing, and along the way to share the ways of thinking about AI's coming impact that motivate our conclusions.

The Big Picture

By trade, we're closer in background to AI than to the fields where human flourishing is most discussed, such as wellbeing economics, positive psychology, or philosophy. In our journey to find productive connections between such fields and the technical world of AI, we found ourselves often confused (what even is human flourishing, or wellbeing, anyway?) and, from that confusion, often stuck (maybe there is nothing to be done? — the problem is too multifarious and diffuse). We imagine that others aiming to create prosocial technology might share our experience, and our hope here is to shine a partial path through the confusion to a place where there's much interesting and useful work to be done. We start with some of our main conclusions, and then dive into more detail in what follows.
One conclusion we came to is that it's okay that we can't conclusively define human wellbeing. It's been debated by philosophers, economists, psychotherapists, psychologists, and religious thinkers for many years, and there's no consensus. At the same time, there's agreement around many concrete factors that make our lives go well: supportive intimate relationships, meaningful and engaging work, a sense of growth and achievement, and positive emotional experiences. And there's clear understanding, too, that beyond momentary wellbeing, we must consider how to secure and improve wellbeing across years and decades — through what we could call societal infrastructure: important institutions such as education, government, the market, and academia.

One benefit of this wellbeing lens is that it wakes us to an almost-paradoxical fact: while the deep purpose behind nearly everything our species does is wellbeing, we've tragically lost sight of it. By common measures of both individual wellbeing (suicide rates, loneliness, meaningful work) and societal wellbeing (trust in our institutions, a shared sense of reality, political divisiveness), we're not doing well, and our impression is that AI is complicit in that…
