Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yes, AI can and will do incredible things. However, there is a basic logistical hurdle that AI needs to cross to effectively "wipe out" humans. Firstly, AI needs a constant source of power for sustained growth and something to maintain that power source. To take over globally, this power source must also be *global* in scope and be durable/reliable. That makes it increasingly harder for AI to wipe us out. Moreover, AI and robotics will need factories to create replacement parts and upgrade parts. To achieve scale in robotics, these factories and supply chains need human operation, as we exist at the scales needed to run them. 3D printers? They don't create things at scale or with enough precision, and they too need a supply chain to manufacture the printer parts, filaments, and other ancillaries required to operate 3D printing at scale. All of this requires human intervention for a long while. Nobody discusses these 'boring' yet essential aspects of existence at a global scale. Effectively, for machines to overthrow or outcompete humans, they'd need to solve the same existential challenges that *all* complex systems face: energy, manufacturing, repair, and scalability. Not easy to do, not even for us, and we were here first. Finally, yes, there could be a series of catastrophic strikes that a malevolent AI could unleash against us to wipe out huge populations. When the dust settles, though, the machines would be on their own and vulnerable to the remaining human population.
youtube · AI Moral Status · 2025-04-27T05:4… · ♥ 3
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz1NvKjZgqKF3WGISB4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgwFpxtDob8lVSba6bN4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgwooZhiVKutNw3dX554AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugym7I122A3NGDIYYC14AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzIp5fmKyHnbji2u594AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugyn8kZQE1Z9vR0zrbx4AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugx_ixl_ZvqCSiNJQRZ4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyrHbyjyCijaqvDJmZ4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugy_kEf8Aqetw4JpRK14AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgzOQdqcHws7CyWHZLx4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"}
]
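A minimal sketch of turning a raw batch response like the one above into per-comment codes. It assumes the model returns a JSON array of objects with the four dimensions shown on this page; the allowed value sets and the `parse_codes` helper name are inferred from the labels visible here, not from the actual pipeline's codebook.

```python
import json

# Allowed values per coding dimension, inferred from labels visible on this
# page -- the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response into {comment_id: codes}, validating values."""
    out = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        out[cid] = codes
    return out

# Example: the coded comment shown above, looked up by its id.
raw = ('[{"id":"ytc_UgwFpxtDob8lVSba6bN4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_UgwFpxtDob8lVSba6bN4AaABAg"]["emotion"])  # fear
```

Validating against a closed value set catches the common failure mode where the model invents a label outside the codebook, so a bad batch fails loudly instead of silently polluting the coded data.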