Hello Concerned AI people.
The recent brouhaha involving Sam Altman and OpenAI is a symptom of the growing polarization around whether AI poses an existential threat to humanity. Getting questions like this wrong could have grave consequences for society. I believe it is critical that we build and track a moral expert consensus on issues like this. We need to find some way for our morals to keep up with our technology; otherwise we will be doomed.
To date, discussion of this topic has focused on the importance of "friendly AI." But I think we should now pivot toward something more tangible. Specifically: should we allow AI to be commercialized, or not?
Toward this end, I propose the following topic and camp renames:
Current name                Proposed name
Friendly AI Importance  ->  Should AI be Commercialized?  (topic name)
Such Concern Is Mistaken -> AI should be commercialized.
Friendly AI is Sensible ->  AI Poses an Existential Threat
Let us know if any supporters of the above-named camps object to any of these proposed name changes. Otherwise, I plan to make the changes over the next few days.