Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI artists are not artists. They can do Google well, that's it. They're unskille…" (ytc_Ugy22UGG0…)
- "Oh look, a wanker who decides to defend AI because they cannot create art. Remov…" (ytr_UgztNUPgg…)
- "If this is true then why are all these AI channels still pumping out 5, 10, 15, …" (ytc_UgxWQxSgO…)
- "@SteelManEXEJ316 I'm assuming you're going to end up creating something just lik…" (ytr_UgxfVXNJ5…)
- "ChatGPT starting to sound more like a sleezy politician the more you probe its a…" (ytc_UgyjELZTp…)
- "The one robot: here's a box and ph I miss / The two robot: huh U PIECE OF SH*T WHY…" (ytr_Ugxbc3cjV…)
- "99% job loss by 2045, 50% by 2030. That’s what the experts are saying. During t…" (ytc_UgzmDSWGn…)
- "If the ai is given great enough control and happens to understand morals, ethics…" (ytc_UgzxGV09V…)
Comment
I'm a programmer, and when I ask the AIs I have interacted with programming-related questions, it becomes very visible that they make mistakes, some of them serious errors.
One cannot simply trust AI by default.
AI can already be really good in some ways, particularly at pointing to the places in existing documentation that contain specific information.
AI will probably get better.
Source: youtube · Topic: AI Harm Incident · Posted: 2026-02-20T06:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwpcQCZRzTR_hf7zid4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxU4i7YhI3pjQgjuUR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy7IeUePBR2klJeNil4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx_cXwWEmQ7wGt1fH94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzu3PP8LaAMAF-IVfR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwbcwejp09Od8ubUM54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyrio-vh1RKzzjzEy94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxEsjF9nIMKhS3dTjN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugyuvk002-tm4kj3y_t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgztKqlE5n2-XWpFZsN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
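The raw response above is a JSON array with one object per coded comment, each carrying the same four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed, validated, and indexed for "look up by comment ID" follows; the field names come from the records above, but the allowed-value sets are inferred only from the visible data and may be incomplete.

```python
import json

# One record copied from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgxEsjF9nIMKhS3dTjN4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "industry_self",
   "emotion": "resignation"}
]'''

# Allowed values per dimension, inferred from the visible records;
# the real codebook may contain more categories.
DIMENSIONS = {
    "responsibility": {"none", "distributed", "ai_itself", "user", "developer"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "ban", "liability", "industry_self", "regulate"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "resignation"},
}

def index_codes(raw_json: str) -> dict:
    """Parse a coding response and index records by comment ID,
    skipping any record with an unrecognized dimension value."""
    by_id = {}
    for rec in json.loads(raw_json):
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            by_id[rec["id"]] = rec
    return by_id

codes = index_codes(raw)
print(codes["ytc_UgxEsjF9nIMKhS3dTjN4AaABAg"]["emotion"])  # resignation
```

Validating against an explicit value set before indexing means a malformed or hallucinated label fails loudly at ingest time rather than surfacing later as an impossible category in the analysis.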