Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One problem: AI doesn’t know facts and can’t verify the veracity of its own statements. So while “being polite” might make it appear to be eager to help you, it will not make its answers any less unreliable. Hallucination is still a problem with AI, and saying “please” doesn’t magically change that.
youtube AI Moral Status 2026-01-17T02:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugz-yT-HsyKB6PH7Jp94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyn-gVXbarOZe7H35x4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzMQ0OHfGSWUxdgf914AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzo2UXrRipzYeBTsux4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgykkhYm2Gg_UwzRket4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxpzOb7eibIGwSIlTd4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgxMzGQc5DPTiH2sqbV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgznCjNFiTkjgtGKtTx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzhWiUjjPUhtq2kmFh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx20CzkFK0JbHP8Aut4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
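A raw response like the one above is a JSON array of per-comment codings, so it can be mapped back to individual comments with standard JSON tooling. A minimal sketch, assuming the schema shown above (an `id` plus the four dimensions); the helper name `coding_for` is hypothetical:

```python
import json

# Abbreviated raw LLM response (two rows from the example above);
# schema assumed from the output: id plus four coding dimensions.
raw = '''[
  {"id": "ytc_Ugz-yT-HsyKB6PH7Jp94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyn-gVXbarOZe7H35x4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]'''

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    by_id = {row["id"]: row for row in json.loads(raw_response)}
    return by_id.get(comment_id)

result = coding_for(raw, "ytc_Ugz-yT-HsyKB6PH7Jp94AaABAg")
print(result["responsibility"])  # -> ai_itself
```

Indexing by `id` first makes lookups cheap when checking many comments against one response, and `get` returns `None` rather than raising if the model skipped a comment.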