Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"What if you program them in correctly?" is a pretty silly question given that humans make mistakes all the time. I'm sure mistakes will happen also with robots, but overall automation tends to increase safety by orders of magnitude. When you think about it you'll also realize that the destructive capacity these things will be severely limited by ammo and fuel Skynet just isn't an issue.
youtube 2012-11-23T19:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwkTd4vXc32HI_5tfh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy9mBoaAtemGq2dYNB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzGj_CMgD8AM9wGPKZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxrtkvP9hq0PCsJ3EZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzzp5_RDZwOLFyXjLN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyD5pyxiyLGw_kg4w54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgyKGNy6C-78aOwLCFV4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugwvf7PN3ITtzuv76et4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxSaelGApyYIXLQfkh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyqYzqMuGvNtO2BBwV4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
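Since the raw response is a JSON array of coding records keyed by comment id, a single comment's codes can be checked against the dump by indexing the array. A minimal sketch (using two records copied from the response above; the variable names are illustrative, not part of any app API):

```python
import json

# Sample input: two coding records taken verbatim from the raw LLM response.
raw = '''[
  {"id": "ytc_UgxrtkvP9hq0PCsJ3EZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzzp5_RDZwOLFyXjLN4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]'''

records = json.loads(raw)

# Index records by comment id so each coded comment can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

codes = by_id["ytc_UgxrtkvP9hq0PCsJ3EZ4AaABAg"]
print(codes["responsibility"], codes["emotion"])  # developer approval
```

The lookup result for this id matches the Coding Result table above (responsibility: developer, emotion: approval), which is how a raw response can be audited against the displayed codes.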