Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Muscles were replaced, intelligence was replaced... what rests, then? I think th…" (ytc_UgysWI3CW…)
- "The plan is to kill us off and bring in the new beast anti christ system…" (ytc_UgyESjQF5…)
- "Mispronounce something and make a mistake once In a while. Otherwise you sound t…" (ytc_Ugxw1Cq2x…)
- "They are being modelled from humans and learning from humans, of course they are…" (ytc_Ugzdx3YKn…)
- "Thank for stunting on those lazy AI fools by actually citing the people who make…" (ytc_UgyDwz1zZ…)
- "First of all that robot is a man steal & U can't win, it don't even have feeling…" (ytc_Ugy5avzN1…)
- "you're artists. you don't know math. you don't know programming. you don't know …" (ytc_UgyLqG964…)
- "I haven't yet watched more than 5 minutes of this video, but I want to propose a…" (ytc_UgwMDKWeH…)
Comment
> The worst part about all of this is that it was entirely predictable. Large language models give you what you want. That's their job. To guess the words you want to hear and what order you want to hear them in, based on your prompt.
> It's main priority isn't safety or truth; it's giving you what you want. It sets out the achieve that priority at any cost, it seems.
youtube · AI Harm Incident · 2025-11-08T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
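A coding result like the one above can be held in a small record keyed by comment ID, which is all the "look up by comment ID" feature needs. A minimal sketch, assuming an in-memory index; the `CodingResult` class and `index` dict are illustrative, not the tool's actual implementation (field names mirror the table):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    """One coded comment; fields mirror the dimensions in the table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO-8601 timestamp

# Hypothetical in-memory index keyed by comment ID (assumption: real tool
# may back this with a database instead).
index: dict[str, CodingResult] = {}

row = CodingResult(
    comment_id="ytc_Ugwtq_2xtJm-1hoZzPN4AaABAg",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="liability",
    emotion="resignation",
    coded_at="2026-04-27T06:24:59.937377",
)
index[row.comment_id] = row

print(index["ytc_Ugwtq_2xtJm-1hoZzPN4AaABAg"].policy)  # liability
```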
Raw LLM Response
```json
[
{"id":"ytc_UgzYNa3n3wkTmQzOwqZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxX2W5IxAIaIeMK2uR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"sadness"},
{"id":"ytc_UgzM5Ivg4SlbO422C_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxfmb2wXwsIo7-aIY54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwTGHvRBAfMi_3mQ9x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwWjBKXqExRgvkKx594AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx5_b4XHsU-C99o_a94AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugwtq_2xtJm-1hoZzPN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzU1_5ftrhxY86qMdd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxdUJNZ5sIC4iinWu54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
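Because the raw response is free-form model output, it is worth parsing and checking each coded dimension against the codebook before trusting it. A minimal validation sketch; the allowed value sets below are inferred from the samples shown on this page, and the real codebook may include more categories (assumption):

```python
import json

# Allowed codes per dimension, inferred from the batch shown above
# (assumption: the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "sadness", "outrage", "approval", "resignation", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and flag out-of-vocabulary codes."""
    problems = []
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                problems.append({"id": row.get("id"), "dimension": dim, "value": row.get(dim)})
    return problems

raw = '''[
 {"id":"ytc_UgzYNa3n3wkTmQzOwqZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugwtq_2xtJm-1hoZzPN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]'''

print(validate_batch(raw))  # [] when every code is in vocabulary
```

A batch with any unexpected code (or a JSON parse failure) can then be routed back for re-coding instead of silently polluting the results.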