Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Oh and this week I was HORRIFIED to learn 1) There are several A.I.s across USA alone 2) each one is housed in several buildings 3) 1 building alone uses 5 MILLION gallons water PER DAY 4) That water is NOT recycled into it, not reused (some are but not all) and then they ask US to conserve. What the ...? We're being played. Anyway, WHY do we need a glorified Google search? For what? I tried writing a novel, its chapters were extremely short (yeah!) and then it says: "do you want me to write what happens to A, B, and C in the ___? I got ideas!" and then I say yeah, like 7 or more times, and it's a story that goes nowhere. No drama, no development, nothing. Like eating food without nourishment nor satiation. I point it out to it, it agrees and says it'll do better and.. rinse repeat: I got the same thing. It is incapable of coherently wirting a novel, or even a short story. So then I changed A.I.s and the 2nd AI, I gave it a prompt, there's a (job / male) describe some (adjective) action he does when he gets home. Make it (entered scene mood like dramatic, sci-fi, funny, etc.). I gave the same to both A.I.s The 2nd one gave me the EXACT SAME secondary CHARACTER to interact with mine, they did the same thing and expressed themselves in the exact same way. WT what?? Only the names were changed. It was basically the same scene. We been played! Expect a lot of cloned novels out in the market soon! LOL!
YouTube · AI Moral Status · 2025-11-16T17:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgykZZ8_sMI9IaaKNoN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyg8gJXhMVruvVZKkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxQ7p3HGgsbvxYrSdN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgziJvynvj7cmZbbqvB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyKvlG5mp5vc14qBTl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzELnktJ3-HaEgbkzl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyaYw9GQCmjdyqVBul4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy4F0rJ8Gf7EFLBo6l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxhcqUgmFnkMYYclRh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyCMEW3tkfWQ-5Pbx14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
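The raw response is a JSON array of per-comment codes keyed by comment id. A minimal sketch (Python assumed; the ids and field names below are copied verbatim from the response above, but this indexing step is an illustration, not necessarily how the coding pipeline itself works) of mapping the batch response back to individual comments:

```python
import json

# Raw LLM response: a JSON array of per-comment codes
# (two records copied verbatim from the response above).
raw = """[
  {"id": "ytc_UgyKvlG5mp5vc14qBTl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgykZZ8_sMI9IaaKNoN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# Index the batch by comment id so each comment's coding
# can be looked up directly for inspection.
codes = {rec["id"]: rec for rec in json.loads(raw)}

rec = codes["ytc_UgyKvlG5mp5vc14qBTl4AaABAg"]
print(rec["responsibility"], rec["policy"], rec["emotion"])
# → company regulate outrage
```

The dimension/value table shown above for this comment is exactly the record with the matching id pulled out of the batch this way.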