Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
iRobot and Terminator and other robot films were telling us the future from year…
ytc_UgzWKOKIw…
To be fair, instructing an AI to ignore its filter and say something it is not s…
ytc_Ugyudbt_V…
Compared to 'these people' the AI they're playing with MAY be relatively sentien…
ytc_UgyP6P1FU…
DIVINE TRUTH WORD DELIVERED EARLY 1ST TRANSLATION!!!
Our CREATOR (GOD SPIRIT- (…
ytc_UgxcXxvk4…
Out of spite I ignore the AI overview google tries to shove in my face anytime I…
ytc_UgxSxoil4…
Having people overseas to correct the driverless taxis is such a stupid idea.
T…
ytc_UgzPzK7pA…
We should like, ban AI? Its straight up a dangerus Tool made with no regard what…
ytc_Ugxpg97V9…
Don’t worry about Ai. Worry about Sam Altman a human. Watch his train wreck of …
ytc_UgzXI4OQ9…
Comment
Luddite chiming in:
AIs today are getting really good. We're talking about getting something like 97-98% of decisions right. However, getting 2 to 3 decisions wrong out of 100 is not safe driving.
Which is fine, if there's a competent driver in the vehicle.
But if the self-driving cars take over, the next generations won't learn to drive. They'll be taught the principles but will never have to apply them.
So we'll wind up with cars that occasionally do something really stupid and drivers in them that won't know how to handle it.
reddit · AI Harm Incident · 1546355358.0 (2019-01-01 15:09:18 UTC) · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id":"rdc_ed0ovkg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_ecyujhf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_ecytyww","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_ed0p4o0","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_ed0j4h8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"})
```
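Note that the raw response above ends with a stray `)` where the closing `]` of the JSON array belongs, so a strict JSON parser would reject it outright — plausibly why every dimension in the Coding Result table reads "unclear". A minimal sketch of how a coding pipeline might parse such output defensively (the function and constant names here are hypothetical, not taken from the actual tool):

```python
import json

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> list[dict]:
    """Parse a model's JSON-array response into per-comment code dicts.

    Falls back to a single all-'unclear' row when the output is
    malformed (e.g. a stray ')' instead of the closing ']').
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        # Unparseable output yields no usable codes for this batch.
        return [{dim: "unclear" for dim in DIMENSIONS}]
    # Missing keys in an otherwise valid row also default to 'unclear'.
    return [{dim: row.get(dim, "unclear") for dim in DIMENSIONS} for row in rows]

# Malformed output, as in the raw response above (truncated for brevity):
bad = '[{"id":"rdc_ed0ovkg","responsibility":"none","emotion":"indifference"})'
codes = parse_codes(bad)  # one row, every dimension "unclear"
```

Under this reading, the "unclear" values in the table are a parse-failure fallback rather than a substantive judgment by the model.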