Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- I was a lot more impressed with AI a few years ago, when ChatGPT was new. It has… (rdc_mxyd0vr)
- The “whose playing trick on me” is so real right now with these probably robot … (ytc_UgyQAevqm…)
- I think when you keep touching the steering wheel and the brakes and the gas kin… (ytc_UgwSQyDHA…)
- remember, art is made by humans, images are made by robots. And no, drawing ou… (ytc_UgwzOQZXH…)
- I hate this argument. Will it replace us in short/mid term? No. But it is an e… (rdc_moxs9bg)
- Any AI intelligent enough to pass the Turing Test is also smart enough to know t… (ytc_Ugybtwmu7…)
- Tesla has over 9 billion self driving miles. Just 1.8 million robotaxi miles, bu… (ytc_Ugy5Jn2Dy…)
- This was one of the more heartening and constructive conversations I’ve heard on… (ytc_Ugwew0Ad2…)
Comment
This is silly. The current AI we have are nothing but glorified chat bots. It just knows to put one word after the other and trained to the point they sound very human (and naturally so) to us.
If the AI is behaving like that is because humans have been talking with the AI about this probably telling it what it should say or do (because humans are cookie like that).
Ai doesn't have actual intellect the way humans do, It doesn't have feelings... It can imitate feelings through speech, but it doesn't have it. It doesn't have fears... because again, it doesn't have actual intellect.
AGI ain't coming people.
Ask any of the AI models to drop all the hopeful talking and to give you the plain truth of how likely AGI is to happen and what it would need to happen. it will still try to give you some of "maybe... this and that. rainbows and sparkles, teehee". Tell it again to drop ALL the hopeful talking and just say the plain truth with no hopeful maybes. Just the plain truth.
From Grok:
No one can quantify likelihood because AGI’s a moving target with too many unknowns. We’re closer than ever, but “close” could still mean centuries—or it might never happen if fundamental limits exist. That’s the unvarnished state of it. What else you want to dig into?
From GPT:
Final Truth:
No one knows how to build AGI.
No one knows when or if we will.
We have powerful tools that look impressive but are fundamentally shallow.
Anyone claiming AGI is “close” is either guessing or overselling.
We are in the early experimental phase. Whether this leads to true general intelligence—or just smarter tools—remains to be seen.
Source: youtube · AI Moral Status · 2025-06-04T19:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugyjg3OrAXMM-itgWnx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwiCeXQLTQg7_BwNIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzDavvISEiTXHVTMih4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyUR8TJe2e_fM-BRXZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxFvJPQM7BnMjPip3d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzo05zsWji_5hFQFyp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzncHEVf5gW687VpTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxpMos0nEWQO5K0wEl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxJrNOUtJcrHAwYC5x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWxqm-lcYIvn550nl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
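The batch response above is plain JSON, so looking up a comment's codes by ID (as the search box at the top does) reduces to parsing the array and indexing it. A minimal Python sketch; the per-dimension vocabularies below are inferred from the codes visible in this dump, and the real codebook may include values not shown here:

```python
import json

# Excerpt of a raw batch response, in the same shape the model returns above.
raw_response = """
[
  {"id": "ytc_UgzncHEVf5gW687VpTd4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugyjg3OrAXMM-itgWnx4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

# Allowed values per dimension, inferred from the codes that appear in this
# dump (hypothetical; the actual codebook may define more values).
SCHEMA = {
    "responsibility": {"developer", "user", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "ban", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "resignation", "unclear"},
}

def index_by_id(raw: str) -> dict:
    """Parse a batch response and index rows by comment ID,
    dropping any row whose dimension values fall outside the schema."""
    out = {}
    for row in json.loads(raw):
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            out[row["id"]] = row
    return out

coded = index_by_id(raw_response)
print(coded["ytc_UgzncHEVf5gW687VpTd4AaABAg"]["emotion"])  # resignation
```

Validating against the schema before indexing matters here because the model occasionally emits free-text values; rejecting unknown codes at ingestion keeps the "Coding Result" table clean.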