Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "Anyone else shocked that the AI model in the intro is white? It's been a while s…" (ytc_Ugwr9FV5y…)
- "@wolfsbanehorde yeah idk what the hell you're talking about, it has nothing to …" (ytr_Ugyjwwy0m…)
- "anybody who has half a brain knows why you people are like this. it's all based…" (ytc_UgyAm4F2L…)
- "As an small artist who wants to pursue art as business and a hobby. This is some…" (ytc_UgxKExLnu…)
- "I know AI image generation "I won't call it art here" is painful for traditional…" (ytc_UgzvmTMIo…)
- "FYI - we are ACTIVELY TRAINING & TEACHING AI The program is learning from you…" (ytc_UgzHXax5j…)
- "I think AI art is great to come up with concepts but where will this go? Books? …" (ytc_UgxnMMlkh…)
- "I think we are just behind the peak. Many companies have already tested AI and n…" (ytc_UgxN_LLmx…)
Comment
I like how the chatbot wasn't wrong. Turns out Artificial Intelligence can't beat Natural Stupidity.
ChatGPT is a glorified autocomplete. Great for correlating info, but prone to literal human error since the dataset is based on human text (shocker) and to statistical anomalies or context drift hallucination. Especially if you autistically reinforce your view onto it deliberately.
Source: youtube | AI Harm Incident | 2025-11-26T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugzg3ELcfBrRsXi6FDx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzL8giCwwvGwm8KX494AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxzUas6MozUJZl2LGJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzeyTA7qNfMS3BuGS14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw7ixg8BpzUsc7ETcl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwuZ8VhVrtsPgEWjuZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxckoosUf03lWh12Mp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVaCIpGLN6RdIiK3Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwPaLYcsS2DJlossCt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwuuepNbr8R0XkWKo94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
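
The batch response above is a plain JSON array, one object per comment, keyed by `id` with the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) as string fields. A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID; the variable and function names here (`raw_response`, `codings`, `lookup`) are illustrative, not part of the tool, and only the first two records from the array are reproduced:

```python
import json

# Raw model output, as returned by the LLM (truncated to two records).
raw_response = """
[
 {"id":"ytc_Ugzg3ELcfBrRsXi6FDx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzL8giCwwvGwm8KX494AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"}
]
"""

# Index the codings by comment ID for constant-time lookup.
codings = {rec["id"]: rec for rec in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID."""
    return codings[comment_id]

result = lookup("ytc_Ugzg3ELcfBrRsXi6FDx4AaABAg")
print(result["responsibility"], result["emotion"])  # ai_itself indifference
```

The same indexed dictionary supports both views shown on this page: the per-comment "Coding Result" table is one record rendered as rows, and "Look up by comment ID" is a single dictionary access.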