Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The difference between googling something and asking a chatbot is that, when googling, there's a higher chance of stumbling across information you don't already know that could make you reconsider. If he was googling "what to replace chlorine with" and found a bunch of articles on sodium bromide, then he might have googled "sodium bromide food" or "sodium bromide recipes" or "sodium bromide taste good?" and then found his way to websites talking specifically about the consequences of ingesting sodium bromide. Of course, it's not guaranteed, and he could have ignored those pages. But getting a variety of information sources presented to you independently, without the chatbot's built in deference or the attempt to maintain a context window across what are in reality different topics, means you're more likely to stumble upon some story of some poor sod who'd previously gone down the same rabbit hole and has already experienced the consequences for you to learn from. Not gonna find that article, that video, that Reddit anecdote, if you're relying only on one source of information, the chatbot, for all your information!
youtube AI Harm Incident 2026-01-09T07:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugy-RekoDvgI-rCbfeh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwmLBZOytjav2-DwCt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz_PJSxm_4PCc3JWMN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzhRHOI1SB1byVJnQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzGL_JXmVBhqqcmFNJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzN_NlqYvsDQ_cMe5h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7aWHk02Nh6tM30P94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzHbXSN-y1ouB8prpd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzGhxeIqzYm3soFtjt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxw0l9IxTYV5WsLcrt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
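A minimal sketch of how the coded result above could be extracted from this raw response, assuming the model output is a valid JSON array of per-comment objects with the keys shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). The helper name `coding_for` is hypothetical, not part of any pipeline described here.

```python
import json

# Abbreviated copy of the raw model output shown above (same structure).
raw = '''[
  {"id":"ytc_UgwmLBZOytjav2-DwCt4AaABAg","responsibility":"user",
   "reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz_PJSxm_4PCc3JWMN4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

def coding_for(raw_json: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            return entry
    return None

# The entry whose dimensions match the coded result shown above.
coding = coding_for(raw, "ytc_Ugz_PJSxm_4PCc3JWMN4AaABAg")
print(coding["reasoning"])  # consequentialist
```

Looking up by `id` rather than by position keeps the extraction robust if the model returns the array in a different order than the comments were submitted.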