Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- I thought "sue" as in "sue the pants off the AI slop monsters for identity theft… (`ytr_UgzRqh4Ft…`)
- @bri@b@brianmi40 You’re treating a possibility as if it’s guaranteed. “AI can d… (`ytr_UgxoWtJvn…`)
- AI art is stealing from artists, but I think it's not in the way you mentioned i… (`ytc_UgzkfhIeA…`)
- Here's a bright idea instead of self driving cars cause we all know for the fact… (`ytc_UgzojEgKh…`)
- That's an interesting observation! Sophia does have a unique blend of realism an… (`ytr_UgyafbCsU…`)
- Thank u for this posting. I'm sending this out to my friends who are not artists… (`ytc_UgzNopWe3…`)
- I feel like there is a very easy way for poor OpenAI to stop getting abused... T… (`ytc_Ugw_1MVll…`)
- Another friendly reminder that Section 174, interest rates, and corporate tax cu… (`rdc_oacpxpj`)
Comment
The fatal flaw in Tesla's autopilot is that a computer is just a software algorithm, so it's totally incapable of empathy and it can't feel pain.. Any miscalculation on it's part has no negative consequences whatsoever.. Most humans drive with care and consideration because we don't want to get hurt, we don't want to harm others...and lastly we don't want to spend years rotting in Jail for vehicular manslaughter...
youtube · AI Harm Incident · 2022-09-03T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw7-HYtNcHnzRvs3R94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzo2oNBN_ODlk4no9x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwQEKxn-LOhiRGAUSV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwAfeNphLL4V8Nluql4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwc0uu-jsgVHswe7Wx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwbsSTAmGuk6L50rpZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy4SgqPfyE_aaq6KHh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzYA_EBC1DYHTFejOF4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugycy61gUrhGP2XIY3l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzJlwQK-98RxCDFN9d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
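The raw response above is a JSON array with one record per comment and four categorical dimensions. A minimal sketch of how such a response might be parsed and validated before storing the codes — the allowed value sets here are inferred from the visible output, not from a confirmed codebook, and the full schema may define more categories:

```python
import json

# Allowed values per dimension, inferred from the coded examples above
# (assumption: the project's codebook may include additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval", "mixed"},
}

# A trimmed example of the raw LLM response shown above.
raw = '''[
  {"id": "ytc_UgzJlwQK-98RxCDFN9d4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''


def validate(records):
    """Return a list of problems; empty means every record is well-formed."""
    errors = []
    for rec in records:
        if not rec.get("id"):
            errors.append("missing id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return errors


records = json.loads(raw)
print(validate(records))  # [] when every record passes
```

Checking the model output against a fixed value set like this catches the common failure mode where the model invents an off-codebook label, which would otherwise silently pollute the coded dataset.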