Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What ever happened to pulling the plug? It's not like AI is going to somehow build million of minions to mine it's own materials (you need stuff to build stuff to conquer stuff) before we "do something" to stop such an impossibility. I think we are all buying into a poorly thought out scenario. The worst that can happen is missiles being launched when we don't want them to. Otherwise, the second worst thing that can occur is massive technological confusion, not dissimilar to a really sucky virus. This whole sci-fi 'AI' Armageddon is so implausible that it's insulting to anyone with half a brain. The third worst scenario is some kind of technological breakthrough as a result of an 'AI' conjure that ends up in the wrong hands. In short: The notion of 'AI' autonomously taking over the world implies that it somehow can gain prowess over material resources and the the manpower through which to implement and execute the deviant products derived from said resources and manpower (or robots). Are we supposed to just stand around and watch while this happens (if ever even possible). I think NOT!
youtube · AI Governance · 2019-10-27T20:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
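Each dimension appears to take its value from a fixed codebook. A minimal validation sketch in Python, with the vocabularies inferred only from the values visible on this page (the real codebook may contain more categories):

    # Hypothetical controlled vocabularies, inferred from this page alone.
    ALLOWED = {
        "responsibility": {"none", "developer", "company", "ai_itself"},
        "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
        "policy": {"none", "industry_self", "regulate"},
        "emotion": {"indifference", "fear", "approval", "mixed", "outrage"},
    }

    def validate(record: dict) -> list[str]:
        """Return a list of coding problems; an empty list means the record passes."""
        problems = []
        for dim, allowed in ALLOWED.items():
            value = record.get(dim)
            if value not in allowed:
                problems.append(f"{dim}: unexpected value {value!r}")
        return problems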
Raw LLM Response
[ {"id":"ytc_UgxLXuqUx7jhvGHdCOh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzu5chP-isSGt8ZRW14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgxIyxK-buPx7lOPN8N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzzoaGCg-Ro7kLnzSl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_Ugy6LXDBOEvrlFHZANV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzdRrQhEOLArpyKUmZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw8axtTu_RA1QFfWDZ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyEwompKEZuaFoUIY54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyPLMsUjGdrPrmbADV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyrwCUcbhQUSkZI8eh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"} ]