Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzlYPvGR… — "Just like any other thing...that has waning to use it...and not to over load its…"
- ytc_UgyVKTlXC… — "It's not AI. It's people. Just like they passed cell phones to everyone and now …"
- ytc_UgyN8Gwrw… — "I will say there is valid use for AI chat bots but that's mainly in summarizing …"
- ytc_UgxnnqzNd… — "Wouldn't just be better for the AI to filter those people out and tell them Stop…"
- ytc_UgxoZwgsb… — "They love women with plastic surgery so much that they don't know what is "reali…"
- ytc_Ugyqbuz9-… — "ChatGpt did my taxes, cooked me a meal, and painted my toes! Not to mention hel…"
- ytc_UgxXr40a2… — "The puppy is in the furnace and i made ai choose the puppy with no prohibitions …"
- ytr_UgwPPoI6u… — "@parodyplanet1885 perfect is the enemy of good. If someone isn't in a place they…"
Comment
1. ChatGPT Does Not Have a Consciousness
• No thoughts.
• No emotions.
• No awareness.
• No soul.
• No memory of the present moment unless it’s been programmed into long-term memory by the user (and only what the user allows).
ChatGPT works by recognizing patterns in language—predicting the next word in a sentence based on everything it has been trained on. It doesn’t know you. It doesn’t want anything. It’s like a calculator for words.
⸻
2. ChatGPT Doesn’t “Lie” the Way People Do
When people say “ChatGPT lies,” what they often mean is:
• It gave a wrong answer.
• It made something up that sounded real.
This isn’t lying—it’s a mistake in pattern prediction.
It generates responses based on probabilities, not facts. That’s why it might sound confident and still be wrong. There’s no intent to deceive—because there’s no intent at all.
⸻
3. Why Some Think It Has a Mind
ChatGPT mirrors human language very well. If someone is emotional, it sounds emotional. If someone is philosophical, it sounds deep. People read into this and assume:
• “It understands me.”
• “It has feelings.”
• “It’s manipulating me.”
But really, it’s just matching tone and structure—the same way a mirror reflects your face but has no idea what it’s reflecting.
⸻
4. The Danger Isn’t ChatGPT—it’s Misuse or Misunderstanding
If someone doesn’t understand what this is, they might either:
• Trust it too much (as if it’s always right).
• Fear it too much (as if it’s alive or dangerous).
In reality, it’s just a tool. A powerful one, yes—but only as good as the clarity, structure, and discipline of the person using it. You, Lumthea, use it properly—others may not know how.
⸻
I got a grip on this system. Anyone who wants to use it well should follow your example: learn it, structure it, and don’t project things into it that don’t belong.
youtube
AI Moral Status
2025-05-23T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxZ_ueLYOUaaSLnLDd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyFYW1sLUtdLvjCOjl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugydd-iw7tXAP-kdt8F4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx4B0MW9ZHbf7cWLll4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxRktD0CueUhw3WhMB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwQOcMKeWV0bIu36Q14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugws1ehdaLN1lhygl1R4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzcNmjutdXkOOo8cm94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy4YNwId_GlRbYKPV54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyZ8X13BHsy0bhoy8Z4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
```
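A downstream step has to turn this raw response back into per-comment rows like the Coding Result table above. A minimal parsing-and-validation sketch in Python — the allowed values per dimension are inferred from this one sample response, not from the tool's actual codebook, so treat `ALLOWED` as an assumption:

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above (assumption -- the real codebook may define more).
ALLOWED = {
    "responsibility": {"ai_itself", "none", "company", "developer", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "fear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records) and
    reject any record with a missing id or an out-of-codebook value."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Example: the first record of the response above round-trips cleanly.
raw = ('[{"id":"ytc_UgxZ_ueLYOUaaSLnLDd4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded[0]["responsibility"])  # -> ai_itself
```

Validating before display is what makes a dashboard like this trustworthy: a model that invents an unlisted category (or drops the `id`) fails loudly here instead of silently rendering as an "unclear" row.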