Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
saying we shouldnt have robots that can identify enemies because they might kill us instead is like saying that we shoudnt make guns because they might be badly designed and explode in our hands. if it does somethnig wrong its a bug in the code, and with billions of dollars in military spending i doubt bugs will ever be a problem. also you can turn them off remotely you know? AI can't decide to disobey code, only AGI
youtube 2012-11-23T19:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwkTd4vXc32HI_5tfh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy9mBoaAtemGq2dYNB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzGj_CMgD8AM9wGPKZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxrtkvP9hq0PCsJ3EZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzzp5_RDZwOLFyXjLN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyD5pyxiyLGw_kg4w54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgyKGNy6C-78aOwLCFV4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugwvf7PN3ITtzuv76et4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxSaelGApyYIXLQfkh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyqYzqMuGvNtO2BBwV4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
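To inspect the model output for a specific coded comment, the raw response can be parsed as a JSON array keyed by comment id. A minimal sketch, assuming the response string is available as shown above (the `raw` variable here holds an abbreviated copy for illustration):

```python
import json

# Abbreviated copy of the raw LLM response: a JSON array of
# per-comment codings, one object per comment id.
raw = '''[
  {"id": "ytc_UgxrtkvP9hq0PCsJ3EZ4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]'''

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment shown on this page.
row = codings["ytc_UgxrtkvP9hq0PCsJ3EZ4AaABAg"]
print(row["responsibility"], row["emotion"])  # prints: developer approval
```

The same lookup works against the full ten-element array; any id not present in the response raises a `KeyError`, which is one quick way to spot comments the model skipped.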