Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The other issue is that Artificial Intelligences are... well, artificial. Their "wants" or reward functions are often way, way simpler than ours. What a human wants at any given time is usually extremely layered: right at this very moment I have food, water, and shelter, but I want emotional connection, a sense of security, new experiences, and a sense of mastery over the things I do. One of the results of having such a layered system of wants is that people weigh these wants against each other and can seek long-term gratification over short-term gratification, or even forego "fake" gratification entirely in favor of what's perceived as more legitimate. If you think about it, drugs are basically brain hacks: they stimulate your brain's reward function, and yet most people don't take them, out of social stigma and a fear that any sense of joy from them will be fake. Any existing AI, or AIs built with simple functions at their core, would probably be unable to resist pulling a lever that they knew would directly stimulate their reward function.
YouTube · AI Moral Status · 2023-08-22T02:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzLiCoxtyHU-8qTtFt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwry411H4TNsgx44bJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"indifference"},
  {"id":"ytc_UgxLcR5sBrwJ9Z7JO9x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwrutfkpih9N4QDmEF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyRjTBDoHkiFQy1Xdt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzUt0c01T7on8sEOAB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwlKYy5bdkaEKcHDMl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxvSXcFb6g27DqCZHZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzKkuVnzsZAC5OAv8p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyaWQ8OcZzEx03fQZF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
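As a minimal sketch of how a raw response like the one above could be matched back to an individual comment: the model returns a JSON array of per-comment codes, keyed by the comment's "id" field. The parsing step below is a hypothetical illustration, not the tool's actual pipeline; the variable names and the single-entry example string are assumptions.

```python
import json

# Hypothetical raw LLM response: a JSON array of code objects,
# each carrying the comment id plus the four coded dimensions.
raw = (
    '[{"id":"ytc_UgwlKYy5bdkaEKcHDMl4AaABAg",'
    '"responsibility":"developer","reasoning":"virtue",'
    '"policy":"none","emotion":"mixed"}]'
)

# Index the array by comment id so a coded comment can be
# looked up directly when inspecting its exact model output.
codes = {entry["id"]: entry for entry in json.loads(raw)}

row = codes["ytc_UgwlKYy5bdkaEKcHDMl4AaABAg"]
print(row["responsibility"], row["reasoning"])  # developer virtue
```

Indexing by id (rather than relying on array order) keeps the lookup robust if the model returns the codes in a different order than the comments were sent.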