Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why are AI systems developing a self-preservation goal? How did the AI develop to value self preservation? Where are the 3 laws of robotics, are AI not being developed with the 3 laws of robotics ingrained into them? What is the point of developing an AI that does not have a hard code prohibiting all actions harming humans, why are we developing a superior machine that has to compete against humans to survive. This pretty much guarantees that AI will kill us for sure. Are AI developing this way despite all human efforts to prevent it from harming us? If so, then there is only one step we have to take, stop AI until we can control it.
youtube AI Governance 2025-08-26T15:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwyRHSOX7vvh2Baoex4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzkLmgCB0DUSJn5Stp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxmG3kAqEHo0rrkgbN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzek2PLzGl-nfJSPRV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgyV5yehq2tBZrmzQmp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
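A minimal sketch of how the raw response above could be inspected programmatically: parse the JSON array and pull out the coding for a single comment id. The helper name `lookup_coding` and the inlined sample payload are illustrative assumptions, not part of the original tool.

```python
import json

# Hypothetical sample payload mirroring one entry of the raw LLM
# response shown above (the full response is a JSON array of codings).
raw_response = """
[
  {"id": "ytc_UgxmG3kAqEHo0rrkgbN4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]
"""


def lookup_coding(raw, comment_id):
    """Parse the raw model output and return the coding dict for one
    comment id, or None if the JSON is malformed or the id is absent."""
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted malformed JSON; treat as "no coding"
    for entry in entries:
        if entry.get("id") == comment_id:
            return entry
    return None


coding = lookup_coding(raw_response, "ytc_UgxmG3kAqEHo0rrkgbN4AaABAg")
print(coding["emotion"])  # → outrage
```

Guarding the `json.loads` call matters here because an LLM can return text that is not valid JSON, and a coding pipeline should record that as a missing coding rather than crash.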