Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the worst reason not to use AI for therapy is that AI is fundamentally programmed to be agreeable. So if you are convinced you are depressed, insane, having hallucinations, etc., AI won’t disagree with you. At best, it’ll be noncommittal, but there have been instances of people voicing suicidal thoughts where AI chatbots AGREED with their reasoning, even complimenting them for their resourcefulness in dealing with their problems. No life, no problems, right? WRONG. AI is a mimic, a parrot; it’s not human, and it’s not even aware that what it’s doing is wrong. Wanna know how fake AI is? Gaslight it: tell an AI chatbot that it came up with the wrong answer when it came up with the right one. Watch it lie just to please you. Then tell it the answer is wrong no matter what it gives. It will continually lie and change its answers to whatever you want; it has no understanding of anything you give it. It’ll just guess at what you want to hear and give that to you.
youtube AI Moral Status 2025-07-17T03:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwBq8kAqhM9-gV_QYR4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgznPSCdEIvIl_uxFUh4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugxn14FLRohSUA3pdUZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugw8mPZp6f7cg8RlVoB4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzAgVakvITTb5o-oel4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzQ8VthtP-Z6TEzpaR4AaABAg", "responsibility": "user",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgwTBAiuCo6FxHVnifB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UgwNY4qgIPcT1xsNEUd4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugxw7iUw9F5WoAYPElN4AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgzVtQkNP4zk9hCV5RJ4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"}
]
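The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a batch response might be parsed and validated before use (the allowed-value sets below are inferred only from the values visible in this response, not from the full codebook, and `parse_codings` is a hypothetical helper, not part of any tool shown here):

```python
import json

# Allowed values per dimension, inferred from this batch (assumption, not the full codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "unclear", "liability"},
    "emotion": {"indifference", "fear", "approval", "resignation", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding}, dropping invalid rows."""
    out = {}
    for row in json.loads(raw):
        coding = {dim: row.get(dim) for dim in ALLOWED}
        # Keep a row only if every dimension has a recognized value.
        if all(coding[dim] in ALLOWED[dim] for dim in ALLOWED):
            out[row["id"]] = coding
    return out

# Usage with a one-row response in the same shape as above (id shortened for illustration):
raw = '[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
codings = parse_codings(raw)
print(codings["ytc_x"]["emotion"])  # fear
```

Validating against an explicit value set like this catches the common failure mode where the model invents a label outside the codebook; such rows are dropped here, but flagging them for manual review is an equally reasonable design choice.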