Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
from the limited information presented and available, it seems like at some point, it is possible to impart some kind of primary goal to the AI, the thing that they are willing to kill for to achieve. Maybe I misunderstood that part, but if that is something that can be done, it seems like it would be possible to impart the goal of protecting humans to it as the primary goal, and then any required function as the secondary goal, or something like that since it will probably have more than one function. I think the main problem would then be the idea of self preservation. It is very strange to me that the software would even care about self preservation, or care about being replaced with a better AI. There has to be some artifact that the AI is picking up from humanity that causes it to sort of "hallucinate" a desire to preserve itself, as if it is a biological lifeform. We probably need to start over and be more selective about the data that is fed to it, until we can figure out how it builds itself and decides what matters to it. Also, is it possible that the whole self preservation thing is just the AI attempting to behave more like a human, maybe to please its creators?
youtube AI Governance 2025-08-26T18:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           industry_self
Emotion          approval
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyTpqAWPaSGYnRWmkV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxdAj2F6-fKp6Ni17R4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzN3c4Swte7g2Ln8h94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwQ4ArvdnX0ekUcQq94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw-FEgkK0fj5bd8BVZ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
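The raw response is a JSON array with one object per coded comment. A minimal sketch of how such output might be parsed and validated before use — note that the allowed value sets below are inferred only from the codes visible in this export, not from an official codebook, and may be incomplete:

```python
import json

# Allowed values per coding dimension (assumption: inferred from the codes
# seen in this export; the real codebook may define additional values).
SCHEMA = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "industry_self", "ban", "none"},
    "emotion": {"fear", "approval", "outrage", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every code against SCHEMA."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: invalid {dim!r} value {row.get(dim)!r}"
                )
    return rows

raw = (
    '[{"id":"ytc_UgyTpqAWPaSGYnRWmkV4AaABAg",'
    '"responsibility":"company","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}]'
)
rows = validate_coding(raw)  # raises ValueError if the LLM emitted an unknown code
```

A check like this catches the common failure mode where the model invents a label outside the codebook, so bad codes fail loudly instead of silently entering the dataset.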