Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
2:50 Nice of OpenAI to hand the judge the solution so clearly. "...we've learned over time that they can sometimes become less reliable in long interactions." So, first limit interactions to only "common, short interactions." Second, whoever took the information that "we've learned" implies exists and made the decision to continue long interactions can go on trial for murder.
youtube AI Harm Incident 2025-08-30T18:5…
Coding Result
Dimension: Value

Responsibility: company
Reasoning: deontological
Policy: liability
Emotion: outrage
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyZ-MwxU968tp8TPmR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwPjiwC11hOwmYvWM94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyhfzHusQOLWg6ondB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyXap4MgNr8U-MJSQd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzAxrePmpWoPcd5Hr14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwY_RTtq1LvA_8i_BJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwoTh9yFqH8OFUQKr94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyxnIa6VXJJ3syDhz94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgykCyjuT4Ze27LVSth4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzZPA39HHKh8up4_ad4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
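Responses like the one above can be parsed and checked before the codings are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the labels visible in this log, and the full codebook may define more.

```python
import json

# Allowed values per coding dimension (assumed from labels seen in this
# log; extend to match the actual codebook).
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"liability", "regulate", "industry_self", "none"},
    "emotion": {"outrage", "resignation", "indifference", "fear", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings.

    Items missing an "id" or carrying an out-of-schema value for any
    dimension are silently dropped.
    """
    items = json.loads(raw)
    valid = []
    for item in items:
        if "id" not in item:
            continue
        if all(item.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(item)
    return valid

# Illustrative input: the second item uses an out-of-schema
# responsibility value ("robot") and is dropped.
raw = (
    '[{"id":"ytc_x","responsibility":"company","reasoning":"deontological",'
    '"policy":"liability","emotion":"outrage"},'
    '{"id":"ytc_y","responsibility":"robot","reasoning":"virtue",'
    '"policy":"none","emotion":"mixed"}]'
)
codings = parse_codings(raw)
```

In practice a pipeline like this would also want to verify that each returned `id` matches a comment actually sent in the prompt, so hallucinated IDs don't enter the dataset.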