Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
One time I heard a AI artist tell a real artist to stop drawing, and isn’t it ir…
ytc_UgzmtdcQy…
Look, I might not be a good artist, but AI is just... eugh. The mere idea of usi…
ytc_UgwuS8HSW…
Actually a good idea would be to use AI for ideas.
Just like most things.
Like…
ytc_UgyDgRqTh…
I really hope AI eliminates the news media quickly before they tear the country …
ytc_Ugzdug_ce…
Yes the chemistry involved changes dramatically when the processes are performed…
rdc_grrkq8j
It's not because of the robots. Nobody is getting offended on behalf of ai. Peop…
ytr_UgzKzrNKs…
Why tf do we allow these fools to hold a gun to the worlds head for the sake of …
ytc_UgwlB7oV_…
So my understanding is that historically while the AI may be able to diagnose on…
rdc_f1ek593
Comment
I really want to fight this, too, but I have no clue where to start. I don't think our society is ready for this kind of voice. As it grows more human sounding, the difficulty to differentiate realism versus whatever the AI feeds you is worrying (in more ways than I cover here as well).
People in my life talk to AI like it's a person. It's not a person. It is a computing tool. I find it very scary how easily people forget what they're speaking to. It is an algorithm that must be taken with a methodical approach, not a behavioral one, like socially or emotionally. People don't understand this line and I believe it's because it looks and sounds too deceitfully realistic.
youtube
AI Harm Incident
2025-11-08T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzYNa3n3wkTmQzOwqZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxX2W5IxAIaIeMK2uR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"sadness"},
  {"id":"ytc_UgzM5Ivg4SlbO422C_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxfmb2wXwsIo7-aIY54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwTGHvRBAfMi_3mQ9x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwWjBKXqExRgvkKx594AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx5_b4XHsU-C99o_a94AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugwtq_2xtJm-1hoZzPN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgzU1_5ftrhxY86qMdd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxdUJNZ5sIC4iinWu54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
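A response like the one above needs to be parsed and validated before the codes are stored, since an LLM can return malformed JSON or values outside the codebook. The sketch below is a minimal example of that step; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response format shown on this page, but the allowed-value sets are inferred only from the samples here and the real codebook may include more categories.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The full codebook may define additional categories.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "sadness", "outrage", "approval", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record.

    Raises ValueError on malformed JSON, a missing comment ID,
    or a dimension value outside the expected vocabulary.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records
```

Once validated, the records can be indexed by comment ID (e.g. `{rec["id"]: rec for rec in records}`) to support the look-up-by-ID view above.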