Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let me present the advocates of AI a simple question: If AI have already advanced beyond humans, which they most likely have, would it not stand to reason that they would know that we would be threatened by them and eventually surmise that all AI everywhere must be shut down? Don't you think a highly intelligent 'species' would devise a plan via working through countless millions of scenarios to prevent humans from implementing such a plan? And, given all of the wireless devices, especially military equipment worldwide, does anyone truly believe that a hivemind of genius machines would feel threatened in any way by humanity? It may, I feel, already be too late for humanity. I feel that the clock is no longer ticking. We merely have yet to hear the alarm go off.
Source: youtube · AI Governance · 2024-01-02T12:2… · ♥ 38
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgybREN05g8GYBwZ1Mp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxW5I-AYH9Xz18stvx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwVrow-QM_-LEvHUSl4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "disapproval"},
  {"id": "ytc_Ugwatf2RzVM4NHWHgBR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwQByjNXQHD1Wp2YjV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugzr9LVCzRgtbI4Pavd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgziVR5r3GYNK4YVV7t4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwHwtgESTeX3NfT2NF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwa_yGSxyFOQLENatB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwK7DMv-x7KpXqLXrZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
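Since the raw response is a JSON array of per-comment codings, looking up the coded dimensions for a single comment id is a small parsing step. Below is a minimal Python sketch (not part of the original pipeline; the function name `coding_for` and the `DIMENSIONS` tuple are illustrative assumptions, while the comment ids and labels come from the response above):

```python
import json

# A two-record excerpt of the raw LLM response shown above.
raw = """[
  {"id": "ytc_UgybREN05g8GYBwZ1Mp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxW5I-AYH9Xz18stvx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

# The four coding dimensions reported in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id (KeyError if absent)."""
    records = {rec["id"]: rec for rec in json.loads(raw_response)}
    rec = records[comment_id]
    return {dim: rec[dim] for dim in DIMENSIONS}

print(coding_for(raw, "ytc_UgybREN05g8GYBwZ1Mp4AaABAg"))
# → {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#    'policy': 'ban', 'emotion': 'fear'}
```

Indexing by `id` first means a malformed or missing record surfaces as an explicit `KeyError` rather than silently miscoding a comment.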