Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube AI Harm Incident 2025-07-24T18:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxzcvtslR6_zz2d4sl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwfGwt7oGyvEW4cPOR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwRwvg0nKCESfN3LKB4AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzdUkbjNgUjgM8rI4x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz_vXdTO7c5OcWbiU54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugw3sSWR15T6ynwLyxt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxkceJPH9qE07lwSrh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzAUdnoUmmp0eiOjzB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzaa-LR-q8CvhzTN9B4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyQMOghOF2nxgnzIgV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "approval"}
]
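A raw batch response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, assuming the allowed value sets inferred from this batch (the real codebook may define additional values) and a hypothetical `parse_raw_response` helper:

```python
import json

# Allowed values per dimension, inferred from the sample batch above.
# Assumption: the actual codebook may contain values not seen here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "ban", "industry_self", "liability"},
    "emotion": {"indifference", "fear", "approval", "resignation", "outrage"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding}.

    Raises ValueError if a record carries a value outside the codebook,
    so malformed model output is caught before it reaches storage.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: invalid value for {dim}: {rec.get(dim)!r}"
                )
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a single illustrative record (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"approval"}]')
coded = parse_raw_response(raw)
```

Looking up `coded["ytc_example"]` then yields exactly the dimension/value pairs shown in the coding-result table for that comment.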