Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Too late! I know many of my colleagues will not do their jobs without AI helpin…" (rdc_mbwmcs9)
- "One thing a think AI slop is giving AI, is the ability to make more realistic re…" (ytc_UgyI6fQ_t…)
- "Where are the male robots.. Funny how all these robots always look like they cou…" (ytc_UgxNvRUqc…)
- "I think this guy is being paid to do this to increase acceptance and normalize r…" (ytr_Ugz5KCpbD…)
- "> The model, code-named Avocado, outperformed Meta's previous A.I. model and …" (rdc_oac1qzd)
- "Look, AI is fine. And it hasn't done anything wrong other than make art for peop…" (ytc_UgxwGuGhZ…)
- "Conveniently ignores the difference in the error _rate_ between humans and \"AI\" …" (ytc_UgxStcDnh…)
- "AI doesn't have consciousness, so it isn't \"sentient\". It can only ever become i…" (ytc_Ugx9olgBv…)
Comment

> People are impressed by large language models. But language is the limitation, and similarly images are the limitation and existing code is the limitation. This is not the same as new thought, it's existing thought iterated on very quickly. And it IS often wrong and misses any nuance. It is disruptive and that is scary, but the idea it could ACTION these things is a little far fetched to me. In fact the scarier aspect is THIS, the belief that it could do this, and the instability that threat and misunderstanding causes. That is the space that is dangerous.

Source: youtube · Topic: AI Governance · Posted: 2025-08-02T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyRIRZbpybOVekMaUx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyK2hhZrQ2ql3FhJsN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwO18M8AIEUFpllNBl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxCzupU0SWvjplTQyB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_wF-X5rQR1AM9q6t4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzTMURUbX_RfLNusdN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzFz0OEKW5zddee_sJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgykkP_3bXpP1Gw7xLV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxGwKVRK5hDxtjRvCx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzd8AIDo8cRZBqmcR94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
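The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed by comment ID, assuming the format shown here; the allowed-value vocabularies are inferred only from the values visible on this page and are likely incomplete:

```python
import json

# Assumed vocabularies per dimension, inferred from the values seen above
# (not an official schema -- extend as new values appear in real output).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "mixed", "indifference", "approval", "outrage"},
}

def parse_response(raw: str) -> dict:
    """Index coded records by comment ID, skipping malformed entries."""
    records = {}
    for rec in json.loads(raw):
        # Drop records missing a dimension or using an unknown value.
        if not all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            continue
        records[rec["id"]] = rec
    return records

# Example with one record from the response above.
raw = ('[{"id":"ytc_UgyRIRZbpybOVekMaUx4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
coded = parse_response(raw)
print(coded["ytc_UgyRIRZbpybOVekMaUx4AaABAg"]["emotion"])  # -> fear
```

Keying by ID makes the "look up by comment ID" workflow a dictionary access, and the vocabulary check surfaces records where the model drifted from the coding scheme.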