Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

| Comment ID | Preview |
|---|---|
| ytc_UgzBWliZb… | If an LLM can do your work, it means it's been done a thousand times before. |
| ytc_UgzKxAM7j… | When you realize that AI is pretty much a juiced up version of Auto Correct, it … |
| ytc_Ugw4yRflB… | Things plugged in to a computer with a human controlling it. Everything it says… |
| ytc_UgyWRrcAF… | It analyzes art from artists without the artists consent. Art Station artists re… |
| ytc_Ugzl01csy… | i can't get over how the human scenes like at 1:44 look super AI. is it? is it n… |
| ytr_UgzOPayqD… | @lboxeur62 yes, but who controls the AI? No technology is neutral, it's a… |
| ytc_Ugx-CcyWb… | The amount of AI slop, and the numbers of dumb people who think it's real....is … |
| ytc_Ugw4-93bx… | I’m not really sure what the inconsistency is here. Seems like ChatGPT actually … |
Comment
Classic attempt at scaremongering. The current models of AI cannot create novel ideas. They only have the ability to search of all our history and use that to run computations. We could be anybody at chess if we had access and could use the data as fast as the machines.
youtube · AI Governance · 2025-07-22T00:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwBmIud28l_qtSRqT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyJ-0xOEbWLoLhSZFV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzL9HzXDKb9ha0i6Rl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgztrpcDLVlnizzWtht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx7vzxbFJBL1M3BlBl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx6M_WEJeFM6BYGLfh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxdfryuQSOLLmxPnfh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgznzNf6vSFBqlh3F4R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFPPFg-dMfhv7Z8cd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwoE1RjmODEQJwnjZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```