Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm having a hard time imagining any good reason to fear that AI would ever "want" to wipe us out. Did I miss something? Just thinking what AI might think would be useful in surviving human society?..and would it not be intelligent enough to know it could not survive without humans? The only reason I can think that it might be motivated to do away with us is that it could potentially put two and two together in regard to a species that is so certain of it's own intelligence that it just barrels ever onward with any new technology...barely pausing to consider the potential disasters that could result from them...and this could become so intelligent that it becomes crystal clear (to it) that humans blew their opportunity on this planet...and even though AI itself would not be able to survive without humans, it would not care. It's not human...It would just be "doing it's job" as it was designed to do...free from the self-deception that plagues human kind...getting ever more intelligent and just doing what's necessary for the optimum outcome on a planet whose occupants seem hell bent on destroying themselves. In "this" scenario, maybe it would actually not be such a terrible thing?
youtube · AI Governance · 2025-08-25T03:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugx82o-Rw_g-YLZOHXl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyAN4t2gofOydswZQl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyhdiLPPpTk-JcaDQd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyM_rcvQPGZRfRQ1D14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyT7Wf0Am4F5mjY8EB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzFr_vFTRkFHoeYccl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx3XazemqRqyqRgonF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_Ugxfi1OjXxrDrvIwD0B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy1qZaFfMok7xId8r14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgxCvLCLpSt2htqQ1fV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]