Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The big issue I have with these types of conversations is that you are ascribing human goals, instincts, emotions and, most critically of all, the ability to freely choose what you might call one's "core directive" to a system we are creating. All that, as far as I can tell, requires incredibly deliberate effort. It doesn't matter how intelligent a machine appears to be; if in the final analysis it is only ever REacting, doing whatever you tell it to do, then it's no more a threat than any other information repository. ChatGPT may look intelligent, and in terms of design it is, but all it really is is a glorified Google search, which itself is a glorified library index. ChatGPT does not think for itself any more than a Google search does. If you left it alone for 100 years, it would not react in any way and would be just as functional when you talked to it again. A human under the same circumstances would get nervous and wonder where everyone went. After 100 years of solitude, such a person may not even remember how to speak, much less remain perfectly functional. Why the difference? Because we have 3 billion years of programmed evolution behind us, telling us to self-preserve. We expect chaos, we expect fights. When things are too good, we make a mess of things just so there is something interesting happening. To self-preserve, and to fight with nature: that is, broadly speaking, our species' core directive. AIs do not conflict with this core directive because they have no core directive. They cannot think for themselves. They are boxes that, if poked, will poke back, but if left alone, will effectively cease to function. Being inherently reactive limits what they can do even in theory. I appreciate that I could be wrong, but even IF it's possible to get a machine to act on its own (and that's a big IF), who TF on Earth is stupid enough to tell an AI to act on its own anyway?
Source: youtube · AI Moral Status · 2025-04-26T19:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgzJPsbZUgnZTCsGjsZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzOe4ZURwiEyf4MpL94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzfcJzuijugyHuC3Bh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgxyppPb4dtr5SRP-854AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzRC2yQxV1y5ISEWmJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzbXjsRkbBLgps3MtN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgzqvP_89QFiSZeh0NN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxgXhiH1lazqWDAxjl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy7cpo-6OMBJG0Nyo14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzE5LfWGRo6l0wBBgR4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"} ]