Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- >HR, managers, and recruiters act untouchable Which is going to be interesti… (`rdc_o4htw7j`)
- Of AI succeeds in the way Ai companies want, it will collapse the philosophical … (`ytc_UgyXhIz_B…`)
- WITH AI there will be NO JOBS left for Republican or Democratic Americans ....Th… (`ytc_UgwRtlh8b…`)
- This is what I do, work in pathology in the labs specifically with human tissue … (`ytr_UgxQXpc_J…`)
- Russia loves the AI weapons. Putin even said the nation that leads in AI ‘will b… (`rdc_dwuzidq`)
- Half of what Hao says is ok I guess. The other half is complete and utter BS.… (`ytc_UgyTLCyzG…`)
- ”to do maximum damage” Now thats one way to get my attention. We’re going to war… (`ytc_UgyRcx7FO…`)
- Theoretical foundation of modern computer algebra arise roughly in 1970s to 1990… (`rdc_nor8k2o`)
Comment
If you don't watch where you're driving, you'll cause an accident. What are you doing playing with your phone while driving, anyway? If someone deliberately neglects their responsibility as a driver, it is not fair to shift the blame onto the system. You should not expect Enhanced Autopilot to always stop your vehicle in the event of potential danger. It is a supportive tool, not a safety guarantee. You remain the driver and are ultimately responsible.

Whether it's a Tesla, Volvo, BMW, or Ford—as long as the system is Level 2 (such as Enhanced Autopilot), you shouldn't expect it to autonomously recognize danger and always stop. You remain responsible for monitoring and intervening.

Because Tesla's manual explicitly states that the driver must remain alert and intervene, this supports the argument that a driver who is distracted (e.g., looking at their phone) bears primary responsibility. At the same time, the manual also shows that Tesla recognizes that the system has limitations — meaning that the system does not automatically cover all risks.

It is illogical and journalistically inaccurate to use an accident involving Enhanced Autopilot to label Full Self-Driving as dangerous, and it is unfair to single out Tesla as “the dangerous one” when other brands use similar systems with the same limitations.
youtube · AI Harm Incident · 2025-11-01T10:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwu4rHzVpWCrLX-PDx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx7O0cKCI7CBztphUh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwBIz3LLF1v3ot_cx54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLtdt4jgYA00Lpfr14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx7OWanwO7ttJ-l1wJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxKT6NMx2d-xvzoKiJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxKaEF8mBH_TNOXPFJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyzyXIx-HILl2sh6ip4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwvDSlxL2FHVLLctiJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwmFboZijQ43Q3MEeJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
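A response in this shape can be parsed and looked up by comment ID with a few lines of code. The sketch below is a minimal, hypothetical example: the allowed values for each dimension are inferred from the sample output above (the actual codebook may permit more categories), and `parse_coding` is not a function from this tool.

```python
import json

# Codebook values inferred from the sample response shown above;
# the real codebook may allow additional categories (assumption).
ALLOWED = {
    "responsibility": {"company", "government", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Every record must carry an id plus all four coding dimensions.
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"record missing fields: {sorted(missing)}")
        # Reject values outside the codebook rather than storing them silently.
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
        by_id[rec["id"]] = rec
    return by_id

# One record from the raw response above, used as sample input.
raw = ('[{"id":"ytc_UgwBIz3LLF1v3ot_cx54AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
coded = parse_coding(raw)
print(coded["ytc_UgwBIz3LLF1v3ot_cx54AaABAg"]["emotion"])  # outrage
```

Validating against a fixed value set at parse time is useful here because LLM coders occasionally emit labels outside the codebook; failing loudly keeps such records out of the analysis.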