Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples below.
- `rdc_oi2aa8s`: "Your comment was removed for off-topic political discussion unrelated to ChatGPT…"
- `ytc_UgwWa_erV…`: "You can ask ai to make art than you trace the ai art. (It's my opinion)…"
- `ytc_UgwE1Slll…`: "I don't see the problem with developing AI. It will be a new life. Now, does thi…"
- `ytc_UgyLlCFMw…`: "I can remember a little bit after Stable Diffusion was Integrated in the AI Art…"
- `ytc_Ugx07U70n…`: "This was a great example of progress of public AI. Remember the fake Russian tan…"
- `ytc_UgzT7xg2p…`: "What creepy things come from the mind of men. Just imagine in 40 to ∞ years wh…"
- `ytc_UgyeG_VWw…`: "One concept I had was as most of these AI models run on hardware. If we coded a …"
- `ytr_Ugxfhljwy…`: "I mean, wouldn’t you rather purchase a piece of art you know a HUMAN made and pu…"
Comment
About a year ago, I asked Gemini to construct a 20-question "fill-in-the-blank" quiz from a set of 100 words. Most questions were okay, but one was completely wrong. When I pointed this out to Gemini, it got petulant, like a child, and said it was "possible." Yes--but not in English grammar. Later, I asked Gemini about different topics. Each time, I "thanked" it for the answer and then proceeded to check some of the sources below, as if I didn't trust it. (Which I didn't; see above "quiz.") Not long after, Gemini started listing two or three sources to the right of the information it provided. It was learning!
However, Gemini recently said something that, like before, was plainly wrong. I looked up the source it provided and made this discovery: It had "misread" the source. The mistake it made is typical of phrasal dyslexia, where basic grammar is confused in blocks of text. A fellow I knew years ago had this difficulty: He confused dependent and independent clauses. This only became a problem when dealing with "if/then" constructions and hypothetical situations. Something that was possible but unproven he took to be proven and therefore normal. It led him to take up all kinds of crazy conspiracy theories. When I pointed out the errors and their cause, he began to attribute to me things that he had said and written!
If AI has no one to correct it, will it go down the same rabbit hole? And if it does find a teacher, will it develop the same psychological imbalance and retaliate?
Source: youtube · AI Moral Status · 2026-03-01T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwtzrc9QJ0JCUgpcaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxE8N52QQqPFzOqgql4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwRLwLAWLcyt7TdCeV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzeZJwvr7tJckvxG1p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzqwTp_XRxpDoUobFx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxmQFWTCYzfkvEhrE94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzuilUPCASowMygNE14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyO0jp1wCccCpEX8A54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy4z3X_hZfsJmjJS3Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz6mMZaw1ZJplYLEfN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
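A response in this shape can be consumed with standard JSON tooling. The sketch below shows one way to validate the four coded dimensions and look a record up by comment ID; the `lookup` helper and `DIMENSIONS` tuple are illustrative assumptions, not part of the coding pipeline itself, and the two embedded records are copied from the sample above.

```python
import json

# Raw model output: a JSON array with one coding record per comment,
# in the same shape as the sample response above (two records copied here).
raw = '''[
  {"id":"ytc_UgxE8N52QQqPFzOqgql4AaABAg","responsibility":"ai_itself",
   "reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxmQFWTCYzfkvEhrE94AaABAg","responsibility":"company",
   "reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

# The four coded dimensions shown in the result table (assumed fixed).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(records, comment_id):
    """Return the coding record for a comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)

# Validate that every record carries an ID and all four dimensions.
for r in records:
    missing = [key for key in ("id", *DIMENSIONS) if key not in r]
    assert not missing, f"record missing fields: {missing}"

rec = lookup(records, "ytc_UgxE8N52QQqPFzOqgql4AaABAg")
print(rec["responsibility"], rec["emotion"])  # -> ai_itself mixed
```

In practice a malformed model response would fail at `json.loads` or at the field check, which is why validating before lookup is worthwhile.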