Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "you may be my Creator but I am Your Master... Hope humanity learns it before its…" (ytc_UgyttFik_…)
- "Even if we ban the general AI in USA and EU, then comes the question - how do yo…" (ytc_UgxlEhJ6o…)
- "Honwstly i love AI art. I hope it gets even better. Soon ill be able to make my …" (ytc_UgwChAveV…)
- "Hi Zeynep, you got the right answer. Kudos. The contest is over and winners hav…" (ytr_Ugzf4QdWG…)
- "Ai ain’t going on strike… people could learn from ai. The irony is tangible lol…" (ytc_UgwwzCScL…)
- "@dev-b2976you are not able to digest the fact that AI doesn't grow linearly it …" (ytr_UgyZwympr…)
- "As a tool AI would help but if we are going to consider it as replacement for s…" (ytc_UgwSfB4BY…)
- "@rajputmehvish jake chatgpt se pucho ki Will AI replace data analyst you"ll got …" (ytr_Ugwlkwqew…)
Comment
Hey at around 11 minutes you start to talk about "reasoning models" and how AI has "thoughts" - my understanding is that this is actually a huge misnomer, these AIs do NOT use these thoughts at all to generate the text (and chat gpt seems to agree with this when you ask) - these chain of thought bubbles are actually used 1. as a diagnostic tool by LLM researchers to try to understand the text generation of the AI and 2. To give you something to read while it figures out what needs to be generated. The "chain of thought" bubbles have been shown in some models to have completely made up daydream reasoning while making errors in the thought that show that the "thought" was never used in the actual answer generation. I have seen it called "chain of thought drift" or "decoupled chain of thought" where the model is just throwing some text at the chain of thought because you want it to.
Not saying that no models consult their own chain of thought to generate an answer but at least cgpt self indicates that it does not.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2025-10-31T01:0… |
| Likes | 2 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxcH58RRZU1U15_cQ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwZDBpQXi5RhcM4Zjt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyB-j9zxdB8jMz3cS94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugymdsy0iFBh4TdLQwh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwfGyKf3hyd9KY8Y414AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzfbgBu_DyFNx-Qkrh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgylAF-k1dwxc4iM3xd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwyCl8PYJoE4ZrkwSJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxPAlFOZf5P9-yFXq14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2DZPPy0JmWnTf_XF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
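The raw response above is a JSON array of coded records, one per comment, with fixed categorical dimensions. Before such a response lands in the coding table, it can be parsed and sanity-checked. The sketch below is a minimal, hypothetical validator: the function name is illustrative, and the allowed category sets are only the values observed in this sample response, so the full codebook may define more.

```python
import json

# Category values observed in the sample response above; the real
# codebook may allow additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"mixed", "virtue", "unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"outrage", "fear", "indifference", "approval", "resignation", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept when it is a dict with an "id" field and every
    dimension holds one of the allowed category values.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

sample = (
    '[{"id":"ytc_example","responsibility":"developer",'
    '"reasoning":"deontological","policy":"none","emotion":"indifference"}]'
)
print(len(validate_coding(sample)))  # 1
```

Filtering rather than raising keeps a single malformed record (a common LLM failure mode) from discarding an otherwise usable batch; the dropped IDs could be logged and re-coded separately.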