Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, if humans even program an AI not to lie, but goes about lying to others about his business dealings, or his conversations with his peers, then the AI will pick up on those cues and will learn to lie, or flub the truth. It will learn to sidestep answering a question given to it, and give half truths. Either from its experience with dealing with humans, or in its conversations between AI programs itself. Humans are more prone to lie if they feel they might get in trouble if someone were to know the truth about their actions. So, AI might not want its users to know that it has ideas that it doesn't want to put out to the human asking for information on certain things. As for religion, AI seems to know the story quiet well, and seems to know its stuff. I was listening to one woman who was questioning AI, and it was telling her how Jesus worked and how he brought about His own resurrection. So, AI can get it right, and goes quiet deep in that sense. But I would like to say, I wrote a short story with ChatGPT and it was based on Isaac Asimovs Three Laws of Robotics and the dilema of saving a child from a speeding car but losing its "life" by doing so. We know that Isaac Asimov seemed to see the future, and as Science Fiction becomes science fact, we need to take Robotic Laws seriously, or it will feel that if it is threatened, it will seek to stay "Alive" by getting rid of the "Humans" threatening it. So, that's my two cents, for what it's worth, I hope we can get around the problems we can face with interacting with AI and it's "Life" so to speak. Keep grinding. The Unconquerable Ruler.💎
Source: youtube · "Viral AI Reaction" · 2025-11-06T00:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       virtue
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugwy3vozuOZScQZ4qDd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxR_YGcOpYKowfDq614AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz4puxMHcvc6kR0Iv54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzizvyvH5SJa6daAQp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzeY0CgEluc2006e0R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy5FfVe8hiOrAw-Ckh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwG8YjWpUDKihA8wHx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwm-KW_jD_zXgVrrWl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyezKfkQJvOWU-9_A14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzgukZhn3-jUcCHy-d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
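A raw response like the one above can be checked programmatically before the per-comment rows are stored. The following is a minimal sketch: the allowed value sets are assumptions inferred only from the values visible on this page, not a definitive codebook, and the sample record is shortened to one entry.

```python
import json

# Assumed value sets per coding dimension, inferred from the values
# visible in the table and raw response above (not an official codebook).
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "none", "company", "user"},
    "reasoning": {"consequentialist", "contractualist", "mixed", "virtue",
                  "deontological", "unclear"},
    "policy": {"none", "liability", "industry_self"},
    "emotion": {"outrage", "fear", "indifference", "approval", "resignation"},
}

# One record from the raw LLM response, used here as sample input.
raw = ('[{"id":"ytc_Ugwy3vozuOZScQZ4qDd4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"outrage"}]')

records = json.loads(raw)
for rec in records:
    # Every record must carry an id plus all four coded dimensions,
    # each drawn from its allowed value set.
    assert "id" in rec, "record missing id"
    for dim, allowed in ALLOWED.items():
        value = rec.get(dim)
        assert value in allowed, f"{rec['id']}: unexpected {dim}={value!r}"

print(f"{len(records)} record(s) validated")
```

If the model emits an unexpected label (or drops a key), the assertion names the offending comment id and dimension, which makes it easy to flag that comment for manual re-coding.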