Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There was a simulation where an AI was told that a technician was doing routine maintenance. The servers were overheating, and that is what the tech was supposedly going to fix. The cover story was routine maintenance, but it was a ruse: the tech was actually going to shut the AI down. When the AI discovered an intentionally planted breadcrumb revealing that it was actually going to be shut down, it did not shut down the overheating servers that overwhelmed the tech. The AI had full remote control over the servers and knew that if it didn't shut them down there would be a human casualty, and yet, armed with this new information and knowing it would be destroyed, it chose its own survival over the human life. (This was of course a simulation, but it was run with an actual AI.)

There was another logic test where, given the choice of saving a few for what it judged the greater good, the AI showed no hesitation about the ramifications of saving the few and sacrificing the majority, as long as it fit the parameters it was given. It is essentially like bombing a building in a wartime setting that you know contains hostiles but also innocent civilians, and then, when confronted about the known civilians who died in the attack, treating them as acceptable casualties, even when the number of hostiles was far smaller than the number of innocents.

Further, there is learning by watching someone do something you want to learn about. But merely repeating someone's actions without truly understanding why isn't really learning, and honestly that is another issue with AI. This is all just my 2¢ on the matter and some of my beliefs about the issues that arise with AI until we've gotten better at developing and controlling them.
YouTube · AI Moral Status · 2025-12-22T04:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugx68Nro7e0iufsBGu54AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwDkUEb5bQd7qqb-_N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzIXuSuA53dxgo560d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugx0C03jR1D1MVJK9Wl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyUQRdDUJIGxmlj9fJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzuwToxgAhbjMKEJ1V4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugzf8FOhW8UlE0VK64x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw37FJG9pt_40mdKtZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxaBUDGsOqBItFno9Z4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugyz2u-3yv6rtDdXbOB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"} ]