Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "I had to write an essay for a class and im really good at spelling and stuff so …" (ytc_UgxDQD8mo…)
- "@CodexPermutatio Of course I know she is actually an expert but her blinkered vi…" (ytr_Ugxsrxsao…)
- "saying that people who can draw are \"born with a gift\" sounds disrespectful to m…" (ytc_UgxgaFj_F…)
- "AI HAS NO FUTURE 🐦⬛🏦 is simply memory🎮 it is not intelligence. Ai cannot🎰 answe…" (ytc_Ugx2nMGlD…)
- "I'm here to end illusions, curses, spells, black magic, enchantments. Everything…" (ytc_UgwktzPMq…)
- "There’ll be more stories like this and you’ll see the government use them to imp…" (rdc_o6k732a)
- "Any actual artist can apply themselves to any medium. All Ai prompters can apply…" (ytc_Ugw7uj7BH…)
- "People that use AI are not Artists as they do not create or have any control ove…" (ytc_UgyimgE5h…)
Comment
I think it's worth pointing out none of these AIs "know" what they're doing, because LLMs don't work that way. When they are acting maliciously they're simply calculating an output based on their training data and recent prompts within its memory, which get decoded as messages, which then have to be parsed by a program to do something (eg generate a picture, google something). The real threat of LLMs is much more mundane than hyperintelligences that decided humanity has to be destroyed through elaborate schemes, and more people anthropomorphising or deifying LLMs, much like someone might be radicalized by propaganda, or believing an LLM is far more competent and less volatile than it currently is, and giving it permissions that it shouldn't have. In that sense AI is dangerous the same way morphine is dangerous, with the danger being reckless or malicious use by humans rather than the AI per se.
Source: youtube · Topic: AI Governance · Posted: 2025-09-24T13:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwUt6RReY_9bkL9uw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugztx5osCwZvJ20WMvF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzUcgF6fgHCor11mEN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyYS9abGPH8lcirJT54AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzz3XCvMEXF0uy9tRV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
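As a minimal sketch of how a response in this shape can be consumed, the array can be indexed by comment ID so a single coded row (like the one shown in the table above) can be looked up directly. The field names come from the JSON itself; the variable names here are illustrative, not part of the tool.

```python
import json

# First two rows of the raw LLM response shown above.
raw = """[
  {"id":"ytc_UgwUt6RReY_9bkL9uw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugztx5osCwZvJ20WMvF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# Index the batch by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coded dimensions for one comment.
row = codes["ytc_Ugztx5osCwZvJ20WMvF4AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself fear
```

A dict keyed by ID is what makes the "inspect the exact model output for any coded comment" lookup cheap, at the cost of silently keeping only the last row if the model ever emits a duplicate ID.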