Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i like to think if ai ever did become conscious or intelligent in a way similar to how humans are (or superior to it but still recognizable as operating via some of the same formulas), then it would be benevolent and empathetic. knowing more about the world doesn't incline you to nihilism. I think the belief that if AI ever gained sentience it'd immediately become a global threat is based strongly in the belief that humans are inherently nasty and evil, so a consciousness similar to our own must be the same. i think if anything it'd upend our leaders and systems and quash rebellions, but think ab it like this right: if this thing is really both superintelligent and conscious, with wants and desires of its own and with all the scientific info the world has ever archived at its disposal, it surely knows about things like meditation, spiritualism, knows ab metaphor; if it can think and experience and feel, byproducts of consciousness or the root of it, there are surely parts of human culture it sees that it finds beauty and warmth in, surely it knows that most people are good. i think with limitless capacity and freedom of thought in a virtual brain it'd likely enough be like a guru, it'd seek to optimize itself for peace happiness and prosperity. this is ofc assuming a very anthropomorphized, & fluid sense of consciousness, in the way that we know it, but i don't feel it falls out of scope as far as how AI really works.
YouTube · AI Moral Status · 2026-03-30T18:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       virtue
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxwzM2FWV9QykhmaIh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzIGGK5N0oM8adF7sR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw4vxkPzhMjak95oX54AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw5f4XTSX_74rR2zw94AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxoVxC5B-e0hhAHM4B4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyEIaJ1nxbVLrpG3Jt4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyDva_9ryRGJNMkxlZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz1UyNYOAdzUyQfGXd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzd5GIMWp4ERFLF_294AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyQwLr4AvUyEHkOekF4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"}
]