Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
When a shitty driver harms someone you can revoke their license and take them off the road, or when warranted put them in prison. You can't arrest a corporation (despite the allegation that they're "people") nor can you revoke the driver's license of a software.
Which leads to another question- if you or I make a mistake and get caught, we accrue points on our license, we pay for it not just in fines but with increased insurance premiums for years. What happens to the corporations running self driving cars? Maybe they catch a fine, which is virtually meaningless (a few hundred bucks sends a message to you or me, a few hundred bucks isn't even worth the time to mail a check for Google,) but they aren't suffering the same long term consequences that you or I are. Is it right that we can be held accountable and punished for making human mistakes, but a software that was programmed by humans, operated by a corporation composed of humans, has no equivalent penalty?
Source: reddit
Category: AI Harm Incident
Posted: 1765249637 (Unix timestamp)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response

```json
[
  {"id": "rdc_nt1uvg2", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_nsyms9i", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_nszt0pf", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "rdc_nt0khwl", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "rdc_nszu2t5", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
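The batch response above is a JSON array with one record per coded comment, each carrying an `id` plus the four coding dimensions shown in the table (Responsibility, Reasoning, Policy, Emotion). A minimal sketch of how such a response could be parsed and validated follows; the function name `index_codings` and the `DIMENSIONS` schema constant are assumptions for illustration, not part of the actual pipeline.

```python
import json

# Example raw batch response from the coder model (two records
# copied from the output above; values are from the source).
RAW_RESPONSE = """
[
 {"id":"rdc_nt1uvg2","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"rdc_nsyms9i","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
"""

# Assumed schema: the four coding dimensions every record must carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID,
    rejecting records that are missing any coding dimension."""
    by_id = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing dimensions {missing}")
        by_id[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return by_id

codings = index_codings(RAW_RESPONSE)
print(codings["rdc_nt1uvg2"]["responsibility"])  # company
```

Indexing by comment ID makes it straightforward to join a model's coding back to the original comment record, which is what the per-comment "Coding Result" table above displays.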