Raw LLM Responses
Inspect the exact model output behind any coded comment, or look a comment up directly by its ID.
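A lookup along the following lines can retrieve the stored raw response for a single comment ID. This is a minimal sketch only: the SQLite file `coded_comments.db`, the `coding_results` table, and its column names are assumptions for illustration, not the tool's actual storage schema.

```python
# Minimal sketch: fetch the raw LLM batch response stored for one coded comment.
# The database path, table name, and column names are assumptions for illustration.
import json
import sqlite3

def get_raw_response(comment_id: str, db_path: str = "coded_comments.db") -> list[dict]:
    """Return the parsed raw LLM batch response that covered this comment."""
    con = sqlite3.connect(db_path)
    try:
        row = con.execute(
            "SELECT raw_response FROM coding_results WHERE comment_id = ?",
            (comment_id,),
        ).fetchone()
    finally:
        con.close()
    if row is None:
        raise KeyError(f"No coded comment with id {comment_id!r}")
    # The stored value is the JSON array returned by the model for the whole batch.
    return json.loads(row[0])
```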
Random samples — click to inspect

- `ytc_UgxMQd_KO…`: bought this book after my manager suggested it. I’m a project analyst trying to …
- `ytc_UgwF-19EE…`: the AI is a lie and man can not trust himself to destroy himself and the world, …
- `ytr_UgzR6HG42…`: I have worked in customer service and tech support honestly it's not just speaki…
- `rdc_jvnsooc`: This post inspired me to ask ChatGPT a very simple prompt: >**Me:** Write a…
- `ytc_UgxgT5eqx…`: So this is how the ending begins. With people treating AI like a lesser being an…
- `ytc_Ugz7ifSR2…`: As a non-computer programmer, the part that resonates with me is the "AI slop" l…
- `ytr_UgxM-EiYu…`: @SoloG19995well, smart people actually create ai related stuff, engineering but…
- `rdc_eudgpzf`: If this gains enough traction the states will start banning the banning of facia…
Comment
This whole fretting thing is so stupid. The only thing that matters for AI driving is, does it SAVE MORE LIVES THAN IT KILLS? No driving system will ever be perfect. It just needs to be better than the average human driver.
With a level 1 or 2 system like Tesla, it’s even better because you combine the features of both AI and Human safety systems. The AI is never distracted, and the Human is supervising. The combination only is unsafe when the human becomes distracted, which is 100% the fault of the human.
youtube · AI Harm Incident · 2025-01-03T19:4… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgzaEYvJE2SSYW9L2Kl4AaABAg","responsibility":"manufacturer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyzFpnNOkVcKmH_aaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwzp58DTN1IBX_8ERB4AaABAg","responsibility":"manufacturer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxGdm19dqdWYCXBTRh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgywLlKfnvOanu3C2f54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy4HRAiHIiHg30CIKB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwHooEFPaqbXC2ePTh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzqXpcLVaK5rSykubZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgztIu2__P6nFjw0cbp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNGo6TFyukcaQc9mR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]