Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think what he says makes sense, when he says that AI is becoming more and more intelligent and surpasses humans in intelligence; that it has an IQ of 150, then 170, then 250, and then "takes off". What good is a car without a driver, if the pilot no longer uses the car? And if the man feels and the car doesn't, or even if the car "felt" better, it absolutely couldn't do it without a driver. Where does the car's "feeling" come from: from its own consciousness in relation to the man, or only from what the man feels, since it is incapable of feeling itself? So how can it become more intelligent than you, since it depends on you, and especially since it doesn't feel, it doesn't have affinities, and it "can't make mistakes" because it can't recognize them? It all seems like madness that doesn't make sense. The AI discovers its mistake only from your observations, and only in a closed system like chess, where it can "see" itself. In an open system where imagination and desires drive existence, what does it do without humans?
youtube · AI Governance · 2025-12-04T14:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       deontological
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzeR1W5VDn--za2c8x4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxl9wpvvs9iFpVseb14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz99OwhUI3RKR4bi5h4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzmO8O1N2RjBbwxVdl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxvVSn9B4V0m92rK6B4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyINU-raCjuO7T0GgF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyxYnT6cvUSYcGGPM94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxoIzPw8kdAvShfoW94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyFZF6HXxSTfYp_-il4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxYGlJSn9cEXEitIXd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
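To check how a coding record like the one above was derived from the raw response, the JSON array can be parsed and indexed by comment id. This is a minimal sketch, assuming the raw response is a valid JSON array of objects with the field names shown above; the two ids used here are copied from that array.

```python
import json

# A shortened copy of the raw LLM response above: a JSON array of
# per-comment codes, keyed by YouTube comment id.
raw = """[
  {"id": "ytc_UgxvVSn9B4V0m92rK6B4AaABAg", "responsibility": "unclear",
   "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyINU-raCjuO7T0GgF4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the array by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# The entry for this id matches the Coding Result table above.
code = codes["ytc_UgxvVSn9B4V0m92rK6B4AaABAg"]
print(code["reasoning"], code["emotion"])  # deontological indifference
```

In a real check you would paste the full ten-element array into `raw` (or load it from the tool's export) and compare each dimension against the displayed coding result.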