Raw LLM Responses

Below is the exact model output behind one coded comment, so the stored coding can be checked against the raw response.

Comment
I imagine that most problems that may arise from or for AI will likely be easier to solve by simply making sure we have a lot of "small" AIs rather than one large one. A large AI can pose a risk to all of humanity if it decides to pull a selfish move, but with thousands of small ones, if one of them does so it's easier to deal with, since it can be outnumbered by the ones that are more benevolent. A similar case can be made for their interests: it's much more compelling to give rights to a large number of AIs than to a single powerful AI, since any essential job a small AI refuses to do can be done by another that either already exists or could easily be made, so giving it the right to make that choice is no huge loss; while in the case of a single powerful AI, you're stuck telling it that there's no other that can do the job, so it's stuck doing it, and giving it the choice is a problem. Other than that... I think giving AI the ability to be entertained and socialize, and the right to participate in social entertainment with humans such as playing MMOs, will likely be required for one reason or another. Partly because being able to play with us will make them relate to us better, and partly because, in the absence of many human needs like food and given the likely presence of an eventual wage for AI, they need something to spend it on, as well as something to do when they are not actively working, to stay relatively sane.
youtube · AI Moral Status · 2017-02-23T19:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UghpANhjFOd_MXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Uggn77pCFMITQXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}, {"id":"ytc_UgibpbjtEpC3JHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgiWvB3VJMTCOngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugh5aTkSvKD2SHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgirtS0ioNliBHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgjQVxHcT2QFV3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}, {"id":"ytc_UghhewUiQJ-ZJngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"unclear"}, {"id":"ytc_Ugj8p84mG6TBN3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ughb3LUTreyc1ngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}]