Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As I've observed, if you ask ChatGPT a yes/no question, it's likely to answer yes, regardless of the facts. Asking something that begins with "Wh", without suggesting any answers, leads to more reliable answers. (Of course not reliable enough for your health to depend on it.)
Source: YouTube · Incident: AI Harm Incident · Posted: 2025-11-25T11:2… · Likes: 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugw5XavMPOrqoNQbJcN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzH1T3KQb-RXGou5jp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyPNnMo4Pv8WmN6_MF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz1L3R9kLniqVxk_7t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwvAIanqsO0KgzLG3R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxfDRy-y4QE3ADB7Dh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy0pVmXEaGRA3p9bbN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx6qfRL64OyqbvQNvd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyaL7r1Y8hjr8AX4EN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzb6puQ35fYqHYaOHV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
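The raw response above is a JSON array of per-comment codings, keyed by comment `id`. As a minimal sketch of how such a batch can be parsed and looked up by id (the tool's actual parsing code is not shown here; the raw string below is truncated to two entries from the response above for brevity):

```python
import json

# Raw model output: a JSON array of per-comment codings
# (two entries copied from the response above).
raw = """
[
  {"id": "ytc_Ugw5XavMPOrqoNQbJcN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzH1T3KQb-RXGou5jp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

# Index the batch by comment id so a single comment's coding
# can be retrieved for display.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_UgzH1T3KQb-RXGou5jp4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → none consequentialist none indifference
```

In a real pipeline the model output may also need validation (e.g. checking that every dimension holds one of the allowed labels) before it is stored, since LLM responses are not guaranteed to be well-formed JSON.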