Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
Ok well just like any study, how many times was this repeated and was it peer reviewed? We know that language model AI is just detecting patterns and making predictions. Reset, forget and start over again. And the study isn't very solid if they don't know for sure that these AI models didn't learn off these tests. Yikes
youtube AI Harm Incident 2024-07-15T00:4…
Coding Result
Responsibility: developer
Reasoning: deontological
Policy: liability
Emotion: outrage
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzUVR79bGJtQR310S94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz4qidbZLlgsWxqeah4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzWQWNUWy_zG27eb-54AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzL4fjUpekCYJyfuKl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxlwoQE_feXWGaACwh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwILJKHW69GzYFZbTB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyhDOwZ1QmyiXG5JgN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwxJoURbVjWWlJZ0nt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxKHbzWsSRAf9CWfZN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzaKuec7tvGL4lWGFx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
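Since the raw response is a JSON array keyed by comment `id`, looking up the coding for a single comment is a parse-and-filter step. Below is a minimal sketch; the `find_coding` helper and the inline two-record sample (copied from the response above) are illustrative, not part of any pipeline API.

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment.
# (Two records reproduced from the response above; field names match
# the coding dimensions shown in the result.)
raw_response = '''
[
  {"id": "ytc_UgzL4fjUpekCYJyfuKl4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzUVR79bGJtQR310S94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

def find_coding(raw: str, comment_id: str):
    """Parse a raw response and return the record for one comment id, or None."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = find_coding(raw_response, "ytc_UgzL4fjUpekCYJyfuKl4AaABAg")
print(record["emotion"])  # -> outrage
```

A real pipeline would also want to validate that every record carries all four dimensions before accepting the batch, since a malformed or truncated LLM response would otherwise fail silently.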