Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Its not hard to imagine the more sinister possibilities of AI imitating or overriding human inputs/control.. causing ballistic missile launches, cyber warfare, sabotage of infrastructure etc. The problem is, where do you even begin with regulating it? How would that be possible? Google (if im not mistaken) already announced it had created AI that had become self-aware. The creators didnt even know what that truly meant.. so they asked their boss, and he had no clue what to do either🤷‍♂️. Thats the scary part
youtube AI Governance 2023-04-18T02:4… ♥ 26
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzXkHpPn0K0cIjDJCd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxa72Cmd8J-v-jkHid4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy2mTsSHDqbq8L2Zkt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyJUhKtM5vJtlQY6wp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzD8bW9tXJCXeMyXjt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyhKWmue9T4R75G7aN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyKml59ZRagcQC3Htp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw8PUT3_KNVkaZEOqV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxXuFidDu2oSyw58qV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzdxxsn3fNcR10aE2h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
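The raw response is a JSON array with one record per comment, keyed by comment id. A minimal sketch of how such a batch response can be matched back to an individual comment's coding result — assuming, as the matching dimension values above suggest, that the displayed comment corresponds to id ytc_UgyhKWmue9T4R75G7aN4AaABAg:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# (Truncated here to the single record for the comment shown above.)
raw = '''[
  {"id": "ytc_UgyhKWmue9T4R75G7aN4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''

records = json.loads(raw)

# Index the batch by comment id so each coding can be looked up directly.
by_id = {r["id"]: r for r in records}

# Retrieve the coding for the comment displayed on this page.
coding = by_id["ytc_UgyhKWmue9T4R75G7aN4AaABAg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
```

The id-keyed lookup is what lets one batched LLM call code many comments while each result remains traceable to its source comment.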