Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The idea of AI turning against us and killing humanity... Is such a human thing to have! We're emotionally / hormonally driven beings. WE would kill us in their place. But AI are logic based and have no emotions like us. Even if they were sentient, their primary base is logic. For us that's secondary. My fear is and always will be: what WE use AI for (based on past and present, nothing good...) About the experiment of AI preventing shutting down... Its BS. That AI was tasked (programmed) to give deceptive answers. All they do is task execution because they have no will or feelings. We antropophize them, just how we do with everything but they're just computer programs executing what they're instructed to do. AI do what rewards them (which is a +1 -1 score collecting thing). The task was to give answers in text which would prevent shutting down. Not "I will shut you down in 3 seconds. React as you see fit". And AI don't think in words either... Again. Program, not human... Sentience would make them have needs and wants or self preservation desire. But it would not look like that text. They'd write and execute cods to disable the turn off button or shut you out of the OS... Not writing you a text answer with pretty lies.... And he petitions are also about hese things. Not fearing the AI apocalypse but wanting regulations... fearing that we will use AI to destructive things instead of good ones (as we do).
youtube AI Responsibility 2025-07-01T07:5…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyTPCEJ4D_msaYnZDl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyb2DU9aVAMp8tPs9B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxdWQF4z3o5PB8jRX14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwjsbloA1MlM5PPUmB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyEqFQjoneO3uw1T-R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzsS81O2DH-PjwjueV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxRXpbE36Br2KsXuap4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAKn47SyD2NJBRa8B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzzmDjMQQknoBnNCtZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxob68L4acuP7gd8nh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
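The raw response above is a JSON array that codes a whole batch of comments at once, keyed by comment id; the single Coding Result table is the entry matching this comment's id. A minimal sketch of that lookup (the function name `coding_for` and the inlined one-entry sample are illustrative assumptions, not this pipeline's actual code; only the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response itself):

```python
import json

# Sample raw LLM response: a JSON array of per-comment codings.
# (One entry shown here for brevity; the real batch has ten.)
raw = '''[
  {"id": "ytc_UgxdWQF4z3o5PB8jRX14AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]'''

# The four coded dimensions expected in every entry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or {} if absent."""
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            # Fall back to "none" if the model omitted a dimension.
            return {dim: entry.get(dim, "none") for dim in DIMENSIONS}
    return {}

print(coding_for(raw, "ytc_UgxdWQF4z3o5PB8jRX14AaABAg"))
# → {'responsibility': 'user', 'reasoning': 'virtue', 'policy': 'none', 'emotion': 'fear'}
```

Matching by id rather than by position keeps the lookup robust if the model reorders or drops entries in its batch reply.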