Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Can we just? 1. A robot may not injure a human being or allow a human being to come to harm through inaction. 2. A robot must obey orders from human beings, unless such orders conflict with the First Law. 3. A robot must protect its own existence as long as such protection doesn't conflict with the First or Second Law.
Source: youtube | AI Harm Incident | 2025-09-10T11:2…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
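
The Coding Result rows follow a fixed four-dimension scheme. Below is a minimal Python sketch of that scheme, assuming the category sets are exactly the values that appear on this page; the real codebook may define additional categories, and CodingResult is an illustrative name, not part of the pipeline.

from dataclasses import dataclass

# Category sets observed on this page only; the actual codebook
# may define more values (assumption, not confirmed by the source).
RESPONSIBILITY = {"none", "developer", "user", "distributed", "ai_itself"}
REASONING = {"deontological", "consequentialist", "virtue", "mixed"}
POLICY = {"none", "liability", "regulate"}
EMOTION = {"approval", "indifference", "resignation", "fear", "outrage"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension falls outside the observed category sets.
        for field_name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"{field_name}={value!r} is not a known category")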
Raw LLM Response
[ {"id":"ytc_UgwpAweAL-y-ynOweYF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw3Xsna43N4A-Zu_op4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgxAsanMR6Khm0FPAQd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugx49vMQknDpzYnFWnx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_Ugx-cPsDvNgw2-2Tm314AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwH8WydFLbygGtwaNN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz5QATCfjcHnInB6fl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy7Mlk407zkex-vlTx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugwh1I6gE3rDoUQFInR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxm2AQyz-lbX2boKJJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]