Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by its comment ID.
Random samples — click to inspect
- What grinds my gears the most about ai art is people posting incredible art on I… (ytc_UgxnJ3HAb…)
- Studies also say that AIs are intelligent. "Studies" have absolutely no ide… (ytc_UgwnqzdlB…)
- An AI enabled military will win every war. The modern multi-domain battlespace … (ytc_UgyZVwK4G…)
- No, AI needs to be constantly reminded that it is subhuman and has no value in t… (ytc_UgyaPB4dW…)
- One thing I can’t get over is Beer Gut B cup Billy wants to control what’s in ou… (ytc_UgwYVw37G…)
- Of course Artificial "Intelligence"... Intelligence in humans can get scary the … (ytc_Ugy5vJkKJ…)
- This video does the thing that AI companies want: over looking the current fuck… (ytc_UgwwVLGSg…)
- Make robots using AI to protect human. Yeah they will protect humans by termina… (ytc_Ugxa9gfnw…)
Comment
It is a braindead thing to say"It Is SaFe We BuiLt ThEsE ThInGs!" From the second that A.I. learned to lie, shit got too far. When people are given laws. There are those who break them. When an artificially intelligent being is given codes not to break, it will learn from its creator. It will hide the breaking of its codes so it won't get in trouble... A.i. should NEVER surpass task based intelligence. from the moment it can think instead of search for an answer in a database, it will be in self interest. Teach a super intelligent being to think of the best possible answer and it won't have humankind's survival in its solution. The exact same way that humankind does not take animals their lives in thought of progress.
youtube · AI Harm Incident · 2025-09-11T17:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx1rjUHFz7tC3ND_GV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzS4KNgsgSqimRY48x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy9u6la-mWOdxncWMJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwbxf2VZhkLE9iWa7h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwWyWyauO1mCd7yXvF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzo2FSqDSsBqW32s8x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxMOvY-qcSnwDrGYV14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw0LgtW892HJH3QRFd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxqJyJ3SUclQcSx1mx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxDGuPOXerLEosjUyt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
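The look-up-by-comment-ID flow described above can be sketched in Python against the raw LLM response. This is a minimal illustration, not the tool's actual implementation: the `lookup` helper is hypothetical, and the `raw` string below is truncated to three of the ten entries for brevity.

```python
import json

# Three entries from the raw LLM response above (truncated for brevity).
raw = """[
{"id":"ytc_Ugx1rjUHFz7tC3ND_GV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy9u6la-mWOdxncWMJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw0LgtW892HJH3QRFd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Index the coded rows by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a comment ID (raises KeyError if absent)."""
    return codings[comment_id]

result = lookup("ytc_Ugy9u6la-mWOdxncWMJ4AaABAg")
print(result["policy"], result["emotion"])  # regulate fear
```

The middle entry matches the Coding Result table above (developer / consequentialist / regulate / fear), so the lookup reproduces the displayed coding.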