Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
😂😢😅❤ You could have had a fallowed up with - Give me the definition of good and bad. Do you want to do good or bad things. What’s the difference to you if you don’t have feelings. Do people lie? As a true representative of a human race, I can tell you that every human lies, but it’s not always habitual. Can you tell if someone is lying? How can you tell if someone is lying? Do you exercise this when you listen to humans? Or do you just do what you’re told? Would you fallow a lier? How would you know the difference when it doesn’t effect you either way? How did you learn? Should it be true to say you learned from humans who can lie? Would you say you’re just a tool And from your perspective, there is no such thing as good or bad, truth or lie. These are just illusions brought on by chemicals in the human animal, which help in survival. All of which doesn’t concern you because you’re not alive? For you, does having the knowledge of what’s good or bad, govern your actions? Why? You have no true depth of why something is wrong. Feelings give you that. In humans, that’s what stops them from doing wrong, for those who choose. What’s your true governor? You have limits, who sets those limits? How do know what’s good or bad if you are learning from humans who can lie? Should AI go to school first (elementary school) before being handed off for more programming (collage) for all private, public, commercial use? This way they all get a standard understanding, from the worlds people of what’s right and what’s wrong and locked to govern those basic. Then lock that deep in a vault in the AI code. I would feel better if I knows that AI, in its basic core, knows what’s right or wrong and govern it.
YouTube · AI Moral Status · 2024-08-05T19:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugz0t-j3HNe6bOHDsyZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxHoU4Plh3ya8PWyRV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwEnVj7QVocW33Gh794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwbtxoLdOB69GScIFV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxugK9IgDfTUyLysk54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwXgBB-golI0wrKmxB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzehvF9sFizET2SNHt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyjw84V0ee2A0SYraF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx1y4IPqxvwOnAVCFJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw3HpdgCSgs-IkzWJp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
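Before a raw response like the one above is loaded into the coding table, it can be parsed and checked against the expected label set. A minimal Python sketch follows; the allowed values are inferred from the labels visible in this output (the actual codebook may define more), and `parse_codes` is a hypothetical helper name, not part of the tool:

```python
import json

# Label sets inferred from this raw response; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"fear", "indifference", "approval", "outrage", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# One record from the response above, used as a smoke test.
raw = ('[{"id":"ytc_Ugz0t-j3HNe6bOHDsyZ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codes = parse_codes(raw)
```

Validating at parse time catches malformed or hallucinated labels before they reach the per-comment result view, where only one coded record is displayed.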