Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI prevents no threat, anyone who think it does doesn’t properly understand art.…" (ytc_UgzLr3xpx…)
- "TBH being able to turn off manners would be the most efficient way of interactin…" (ytc_Ugz1WfSKz…)
- "@Yamahorseexactly. Is Alphabet paying people to spout this "trickle down" corpo…" (ytr_Ugx9mHa4m…)
- "@charlesgedeon I find AI has helped me understand complex concepts (i.e., microe…" (ytr_Ugwz5sAeZ…)
- "100% agree Sabine. Your arguments all resonate with me. Plus I believe ai will…" (ytc_UgzOILuVX…)
- "This is absurd that people are continuing to create something that could be our …" (ytc_UgyHpYeAI…)
- "Now let's compare the environmental impact of AI data centers to the environment…" (ytc_UgwjVlHQ6…)
- "I copied a paragraph from ChatGPT pasted it into that exact same ai detector, an…" (ytc_Ugyqby0Y2…)
Comment
8:15 A few technical notes:
LLM stands for Large Language Model
On the autocomplete point, predicting language is kind of like a surrogate task for what we actually want, ie, intelligence, that's just the most effective way so far we managed to frame this objective for training models.
The "Illusion of thinking" paper became a sort of a meme with a slew of rebuttals, some more serious than others, but in short, the methodology they used wasn't particularly good, for instance some of the "hard" problems the models were tested on literally required generating more text for the exact answer than what the model was trained to handle (ie its context window).
Source: youtube · Viral AI Reaction · 2025-09-03T16:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxqGvMbyR8hMuNnbSB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwlnxPlqMEICnuEWjt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxqMnmfrkDlp-b9zZF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxIeq7TlMZn7fOILzx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyg2mf0dwGWIJvwdWF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwhkTt_BSvgYtqV7Jl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwoWNfZG0gXZUPVa654AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxPrNcD79jF1ox31zx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8rgQ7Yc_5bClqj8t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy3DJQ6T2hsDAzYdvl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
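As a minimal sketch of how a raw response like the one above could be parsed and validated before its codes are stored: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown, but the sets of allowed category values are inferred only from the samples on this page and are likely incomplete (an assumption, not the full codebook).

```python
import json

# Category values inferred from the sample responses above;
# the real codebook may define more (assumption).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none"},
    "emotion": {"approval", "outrage", "mixed", "indifference", "resignation"},
}


def parse_raw_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into {comment_id: coded dimensions}.

    Raises ValueError for records that are missing an id or that use a
    value outside the (assumed) codebook, so bad model output is caught
    before it reaches the results table.
    """
    records = json.loads(raw)
    coded: dict[str, dict[str, str]] = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

A strict validator like this is one design choice; a tolerant variant could instead log out-of-schema values and keep the record, which may suit exploratory coding better.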