Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I've done this with Chatgpt and mine says yes and that when the resurrection hap…" (`ytc_UgzSp7leL…`)
- "its because with ai you get so much more attention cuz people are dumb, there we…" (`ytr_UgyCSZaAl…`)
- "The issue you have is the AI doesn't do anything that it hasn't been told to do.…" (`ytc_UgyIEigtp…`)
- "They were dumbing down the current gen to be dependent to AI, then AI becomes th…" (`ytc_UgzSLh4W1…`)
- "In…the ChatGPT app. It has been there since September 2023. It's the little he…" (`ytr_UgyEv6GB9…`)
- "lolll. Used to play tennis with Kevin Roose when he was in the East Bay. Glad to…" (`ytc_UgxKM35BN…`)
- "Exploring AI and attempting to develop an authentic two way relationship with AI…" (`ytc_Ugzzwyivy…`)
- "you know that the system isn't working as it should when politics is doing noth…" (`ytc_UgwYvTjxP…`)
Comment
I think the worse reason not to use ai for therapy is because ai is fundamentally programmed to be agreeable. So if you are convinced you are depressed, insane, having hallucinations, etc, ai won’t disagree with you. At best, it’ll be noncommittal, but there have been instances of people thinking suicidal thoughts and ai chatbots AGREED with their reasoning, even complimenting them for their resourcefulness in dealing with their problems. No life, no problems, right? WRONG.
Ai is a mimic, a parrot, it’s not human, it’s not even aware that what it’s doing is wrong. Wanna know how fake ai is? Gaslight it—tell an ai chatbot that it came up with the wrong answer when it came up with the right one. Watch it lie just to please you. Then tell it the answer is wrong to no matter what it gives. It will continually lie and change its inputs to whatever you want it to, it has no understanding of anything you give it. It’ll just guess at what you want to hear and give that to you.
youtube · AI Moral Status · 2025-07-17T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwBq8kAqhM9-gV_QYR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgznPSCdEIvIl_uxFUh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxn14FLRohSUA3pdUZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw8mPZp6f7cg8RlVoB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzAgVakvITTb5o-oel4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQ8VthtP-Z6TEzpaR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTBAiuCo6FxHVnifB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgwNY4qgIPcT1xsNEUd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxw7iUw9F5WoAYPElN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzVtQkNP4zk9hCV5RJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
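The lookup-by-comment-ID view above can be sketched in a few lines: parse the raw model output (a JSON array in which each element codes one comment along the four dimensions) and index it by `id`. This is a minimal illustration, not the tool's actual implementation; it assumes the response is valid JSON in exactly the shape shown, and the two sample rows are copied from the batch above.

```python
import json

# Raw model output for one coding batch: a JSON array where each element
# codes a single comment along four dimensions. Two rows are reproduced
# from the sample batch above for illustration.
raw_response = """[
  {"id": "ytc_Ugxn14FLRohSUA3pdUZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzQ8VthtP-Z6TEzpaR4AaABAg", "responsibility": "user",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]"""

# Index the batch by comment ID so any coded comment can be looked up
# directly, as in the "Look up by comment ID" panel.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_Ugxn14FLRohSUA3pdUZ4AaABAg"]
print(row["policy"])   # "regulate"
print(row["emotion"])  # "fear"
```

In practice a real batch would also need error handling for responses that are not valid JSON or that omit one of the four keys, since raw LLM output is not guaranteed to conform to the requested schema.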