Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coding by comment ID, or pick from the random samples below:

- "Well, people were already able to delude themselves into thinking that the earth…" (ytc_UgzxkKYPj…)
- "he didn't quite explain himself correctly. the best work i've ever done has been…" (ytr_UgyGjfwdq…)
- "/r/futurology is a default subreddit. People who make throwaways have them autom…" (rdc_cmhdyyo)
- "That thing about training AI takes the power of a small city, but running a huma…" (ytc_UgwLNMQxS…)
- "AI is not the solution, it is the problem. It's taking away jobs, causing comput…" (ytc_UgzVvsxJf…)
- "Don't be ridiculous, shooting and fishing skills won't help you a single bit if …" (ytc_UgwQHMqcg…)
- "As long as the AI does not get control of nuclear codes and starts creating robo…" (ytc_UgxBJvYq8…)
- "How can you feel pride in the final result when you didn't even make it? You can…" (ytc_UgxisaiHY…)
Comment
AI’s first kill? What are we even talking about—nothing’s been killed. This video is pure clickbait rot. The LLMs we have today aren’t autonomous agents, they’re glorified autocomplete engines. Calling them “AI” is like calling a calculator a warlord. So no, this isn’t a warning—it’s a steaming pile dressed up as prophecy. The only thing dying here is critical thinking. Are people seriously buying this apocalyptic fanfic? It’s fear porn for the algorithm addicted.
| Platform | Source | Posted |
|---|---|---|
| youtube | AI Harm Incident | 2025-08-11T02:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyTiW8FBSuH8pUr-sZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyjfDkovW7edE4635Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzVcyKiVGyxtT7sPf94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwCO6TKTAFgxcdkkoR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugymz11Qrw9ZffK0g-J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy5X36LHOnTAya5Yhh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxFCS9nW4BNL9Vknqh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwTb6CkNj-uBou0aep4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwljDq1JxY0DvOpR4d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwQv8ddMh2tyWVppzF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```