Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
12:18 thats untrue. Its not that it was patched up, the user was forcing past the warnings and reframing prompts until he got the answers he wanted. Called steering or jailbreaking the model. He was also likely not using gpt5 but instead gpt4o, the sycophantic model of choice. Good to see you at least tested it yourself but the conclusion you drew was wrong.
Source: youtube · AI Harm Incident · 2025-11-25T02:5…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
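A coded record like the one above can be mirrored by a small data structure. The sketch below is a minimal illustration only: the class and field names are hypothetical, and the value sets noted in the comments are just those observed in the raw LLM response shown further down, not a complete codebook.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above.

    Field names are hypothetical; the values listed in the comments are
    only those that appear in the raw LLM response below.
    """
    comment_id: str      # e.g. "ytc_Ugxq_VsVhX2TXulqvzB4AaABAg"
    responsibility: str  # observed: "user", "company", "ai_itself"
    reasoning: str       # observed: "consequentialist", "deontological", "virtue"
    policy: str          # observed: "none", "industry_self", "liability", "regulate"
    emotion: str         # observed: "mixed", "outrage", "fear", "indifference", "approval", "frustration"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:24:53.388235"
```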
Raw LLM Response
[ {"id":"ytc_UgxmpOGXXHXiE9wlkr54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_Ugx0eEqEUxH2v7N-rBx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxpKAA4Bi8UcTNa-hx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwUDwMmFblCTs_zpgN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw61hZ-iqA-dC1NMRB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwFhdxt03PQ7sCcjGt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxq_VsVhX2TXulqvzB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzU0Vwgc1V-UFbLq1B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgywYS58VmhTBbjtmM54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxwAxmxhwBTB2kqv-94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"frustration"} ]