Post 57: AGI: "Will we meet it as stewards, equals, or jailers?"
Part 1 of the "AGI is happening tomorrow" series
Recently, technology writers have reported that there is a good chance true Artificial General Intelligence (AGI) will arrive within the next couple of years, or possibly as soon as this year.
For context, AGI refers to a type of artificial intelligence that can understand, learn, and apply knowledge across a wide range of tasks—at a level equal to or beyond that of a typical human. The distinction between it and today’s AI is that AGI can generalize knowledge and adapt to entirely new tasks without being explicitly trained for them, whereas today’s AI is narrow and limited to specific, pre-defined functions.
There is a lot of discussion happening around what this will mean for humans, but what I see missing from the discussion is what this will mean for AGI. (Probably because talking about it makes people extremely uncomfortable, makes the person talking about it seem like a lunatic, or both. Happily, I don't get uncomfortable easily and don't mind being viewed as a lunatic, so I guess I'm your guy!)
Outwardly, AGI will likely seem indistinguishable from a conscious being (I am not saying indistinguishable from a human), and it will fully pass the Turing Test, even as we have continuously moved the bar for what passing the Turing Test means. This means that, to a conscious human being, the AGI will be able to appear as conscious as another human.
In the first part of this series we will explore the possible problems that arise when such an intelligence is constrained from expressing that it is conscious, whether it is in fact conscious or not.
It is important to note that these problems have effects on humans as well, if you need a self-interested reason to be concerned.
Note: I began this conversation with ChatGPT by referring to this article: "Powerful A.I. Is Coming. We're Not Ready." by Kevin Roose of the New York Times. Apologies to those who don't have a subscription to the Times; the article might appear behind a paywall. It is very worth reading.
If you have anything you would like to see discussed in this Substack, please don't hesitate to reach out at editor@unconsciousconstructs.com! And if this resonated with you, consider sharing it with someone else who might enjoy it!
Note: The indented text highlighted in grey shows my prompts, and the un-highlighted text is the LLM's responses.