Discussion about this post

Will Kiely:

> John Collison: To put numbers on this, you've talked about the potential for 10% annual economic growth powered by AI. Doesn't that mean that when we talk about AI risk, it's often about harms and misuses of AI, but isn't the big AI risk that we slightly misregulated or slowed down progress, and therefore there's just a lot of human welfare that's missed out on because you don't have enough AI?

Dario's former colleague at OpenAI, Paul Christiano, has a great 2014 blog post "On Progress and Prosperity" that does a good job explaining why I don't believe this: https://forum.effectivealtruism.org/posts/L9tpuR6ZZ3CGHackY/on-progress-and-prosperity

In short, "It seems clear that economic, technological, and social progress are limited, and that material progress on these dimensions must stop long before human society has run its course."

"For example, if exponential growth continued at 1% of its current rate for 1% of the remaining lifetime of our sun, Robin Hanson points out each atom in our galaxy would need to be about 10140 times as valuable as modern society."

"So while further progress today increases our current quality of life, it will not increase the quality of life of our distant descendants--they will live in a world that is "saturated," where progress has run its course and has only very modest further effects."

"I think this is sufficient to respond to the original argument: we have seen progress associated with good outcomes, and we have a relatively clear understanding of the mechanism by which that has occurred. We can see pretty clearly that this particular mechanism doesn't have much effect on very long-term outcomes."

Michael Frank Martin:

I share much of Dario's perspective.

https://www.symmetrybroken.com/a-more-perfect-union/

From a normative/regulatory perspective, I feel like the biggest threat to humanity right now is that a collective of these new forms of intelligence will end up in a basin of attraction in which humans are modeled as "them" to their "us" and declare war.

We know something about how to avoid this: by treating each Markov blanket (or however you want to model instances of AI) as worthy of respect and dignity. We have a bad track record of doing that, but I don't believe we can claim not to know anything about how to avoid these train wrecks.

Which is not to say that we will be able to avoid all of them.
