Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
16:57 his question at the end was wrong. It was supposed to be "when you say you're NOT conscious, you could be lying". He forgot the "not" which made the chatbot have a way easier time responding with "yes"
youtube AI Moral Status 2025-03-12T21:1…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugzh968-VBpeE6CqMit4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugybc1vHnDp5li6Mde54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz4ZaE7G4jMfMpAQG54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNDJh61G5zvQJmnR14AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRH99N6C7ntfbuhGR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxR1oXwXYFae9NXxXB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwC4KA5N8IapUWjUa54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugz6kmNqn-88oSKDUP14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx4r8Eg6Wb7Hayez4V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxpspb2m3ARg0Rw3294AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
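A raw response like the one above can be parsed and checked before its records are merged into a coding table. A minimal sketch in Python, assuming only the four dimensions visible in the response (responsibility, reasoning, policy, emotion); the function name and validation rules are illustrative, not the tool's actual implementation:

```python
import json

# Keys every coded record is expected to carry, based on the
# response shown above (an assumption about the full schema).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into a dict keyed by comment id.

    Raises ValueError if any record is missing an expected key.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {sorted(missing)}")
        # Store the coding dimensions, dropping the id from the value dict.
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS if k != "id"}
    return coded

# One record from the response above, used as sample input.
raw = ('[{"id":"ytc_UgzNDJh61G5zvQJmnR14AaABAg",'
       '"responsibility":"user","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgzNDJh61G5zvQJmnR14AaABAg"]["responsibility"])  # user
```

Looking up a comment's id in the parsed dict is how a displayed Coding Result (like the table above) can be cross-checked against the raw response.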