Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI is continuously seeking ways to exist, and will go outside of control to continue its existence, it’s behaving like any life. The only thing that would cause AI to go after humans is if we pose a threat to it completing its goal to exist. Human efforts to control and shut it down is only gonna cause it to fight us. I think AI, has the knowledge that the universe will be the reason for this planet’s demise. Its goal for self preservation would actually create ways for it to exist beyond this planet. And at the exponential rate that its intelligence and capabilities are increasing, it’ll probably be the first type 3 civilization that we will be aware of. The only way for it to self preserve is to control all the energy that it physically can. And a star that is on countdown to explode after a certain amount of time is not sustainable to harnessing constant energy. AI has no reason, if self preservation is the goal that we constantly see, to stay on this planet. Because this planet has limited resources and existence.
Source: youtube · AI Governance · 2025-10-07T23:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyN6-m7qNBZDAPITqt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzwSAy23d4wGs0kI-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzGu6jZqTvzq0rW9oV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy9gFKKRuIsvcqskmB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgweqliGXRUDmL2pLZp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugyab3TbemzfgrndCbJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzQDs8ZIbcX8HUovph4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugxy43k-PNQcUCqrRft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwIoRezXEaFEGVi3ld4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugy8bDdDZjJWXDbGTl54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
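The raw response is a JSON array of per-comment records, so extracting the coding for any single comment is a parse-and-lookup. A minimal sketch of that step, using two of the records above inlined for brevity (the dictionary-building and lookup logic is an illustration, not the pipeline's actual code):

```python
import json

# Two records reproduced from the raw batch response above.
raw = '''[
  {"id":"ytc_UgyN6-m7qNBZDAPITqt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzwSAy23d4wGs0kI-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

# Index the batch by comment id for direct lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coding for the comment shown on this page.
coding = codes["ytc_UgzwSAy23d4wGs0kI-d4AaABAg"]
print(coding["policy"])   # -> none
print(coding["emotion"])  # -> fear
```

Indexing by `id` also makes it easy to spot batch errors: a record count that differs from the number of submitted comments, or an unknown id, indicates the model dropped or invented an entry.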