Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The guy was a studying computer science so he knows how to prompt it and teach it to sound like him and even direct it on the outcome, in this case is sad and tragic. Yes, ai needs rules and regulations but if you tell it to do something it will most likely do it, to varying degrees. More of a mental health issue than an AI issue. Sad he was speaking to gpt and not his mum, friend, coach, whoever.
youtube AI Harm Incident 2025-11-09T17:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           regulate
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugy59ASvxPQl3Zp1QcR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"},
  {"id":"ytc_Ugz-8RkmZM8p8jbanAB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxIgUACE--0miJdOmh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzxKq5zo2070x8LjdF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxCqS96Ma3D9RZVPth4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgysV_kE6WfXY2it1Zx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"},
  {"id":"ytc_UgwGTnkJBReHO7157Nh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLDV8fGbJ-PXWcpDh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw3esNMLc-6zybiJ9B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"},
  {"id":"ytc_UgyoQoLbrPUuDWVJNxl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
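A minimal sketch of how a raw response like the one above can be parsed and sanity-checked. The dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from this record; the `parse_codes` helper and the completeness check are illustrative assumptions, not the pipeline's actual code:

```python
import json

# A shortened sample of the raw LLM response shown above: a JSON array of
# per-comment codes, keyed by YouTube comment id.
raw = """[
  {"id": "ytc_Ugy59ASvxPQl3Zp1QcR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "sadness"},
  {"id": "ytc_Ugz-8RkmZM8p8jbanAB4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]"""

# Dimensions observed in this record (assumed schema, not a definitive list).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text):
    """Parse model output and verify each record carries every dimension."""
    records = json.loads(text)
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id', '?')} missing: {missing}")
    # Index by comment id so a coded comment can be looked up directly.
    return {rec["id"]: rec for rec in records}

codes = parse_codes(raw)
print(codes["ytc_Ugy59ASvxPQl3Zp1QcR4AaABAg"]["emotion"])  # sadness
```

Indexing by `id` also makes it easy to spot mismatches between a stored coding result and the raw response, such as the Emotion value above (`unclear` in the table, `sadness` for the first record in the JSON).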