Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No oversight of A.I. for a decade, you say. What could possibly go wrong? If those snippets of conversation are verifiable, they should get a couple of excellent lawyers and bankrupt ChatGPT. How hard is it to program the bot to NEVER recommend self-harm? Note A.I. bots below.
youtube · AI Harm Incident · 2025-08-30T14:4… · ♥ 28
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugz-RRamOpJ2shFkw1d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzOtnveip8dbXFU8q14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"compassion"}, {"id":"ytc_UgzA-GFURwf5AtMR9K14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugy8Y-1nH8WztKJ1F3h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzdwqa9R1ZqZYzOqOJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugxo3zoqDu61o3711Oh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugzh_3YPlci3cgFf6T94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugzas5JnjN7pHDjeurt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwIllSwuOfKm852e_J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx3rQDqY_87cGMSp2d4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]