Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We don't even understand our own consciousness. I don't think AI can ever really be conscious / sentient. It's a simulation. It will only have simulated consciousness to the best of our abilities. Pretty much sending messages to itself the same way that we do. That kind of gives it its own inner monologue perhaps but, it doesn't really exist.
Source: youtube · Video: AI Moral Status · Posted: 2025-06-03T16:1… · Likes: 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyUwsBNqBSoUENt9Hl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz0dle2AslAWoCj0vR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzukWbOyPucVjtXflF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy7Gdccmd4Ddy73t7Z4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy3anUeUrp_s4BhaC14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzoONBZNVaE4k2wKBd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxxOagt1Ac8e_16kkJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw51h7e3QxV3eWAUNF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzVjZS1NrCa-2ou3y54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwmxVoXlngOissEQIF4AaABAg", "responsibility": "user", "reasoning": "contractualist", "policy": "liability", "emotion": "mixed"}
]
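The coded result above is one record pulled out of this batch response by its comment id. A minimal sketch of that lookup, assuming the raw response is a JSON array of per-comment codes as shown (the embedded response is truncated to two records here for brevity; the ids and values are taken verbatim from the response above):

```python
import json

# Raw batch response from the model: one code object per comment,
# keyed by the YouTube comment id (first two records of the batch).
raw = '''[
  {"id": "ytc_UgyUwsBNqBSoUENt9Hl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz0dle2AslAWoCj0vR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the codes by comment id so any coded comment can be inspected.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw)}

# The record for the comment shown above.
rec = codes_by_id["ytc_UgyUwsBNqBSoUENt9Hl4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → none unclear none indifference
```

This matches the four dimension values in the Coding Result table; any of the other nine comments in the batch can be inspected the same way by id.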