Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgyV0_Vk6…: "The very fact that the “ai” had the audacity knowing he was about take his own l…"
- rdc_l588f4n: "The number of comments on here about why companies aren’t allowing AI tooling is…"
- ytc_UgyBcMzpQ…: "All these techguys talk about The Super Intelligence because they want you to ta…"
- ytc_UgwWDD1WE…: "Totally agree. AI should augment humanity, not replace it. Creative types and hu…"
- ytr_UgwyFtLmX…: "We appreciate your thoughts on the robot in the video. It's interesting to see h…"
- ytc_Ugx-Nht1H…: "If AI maintain the ethics and roles concerned with humanity then it will be succ…"
- ytc_UgxT2QLDy…: "Imagine life as a grand simulation that began with a divine creator forming huma…"
- ytc_Ugx8iE0TG…: "Is any danger to chat with ai in whats up always. As for any help in daily life.…"
Comment
FSD, at least for the last 12 to 18 months, when properly supervised by a competent driver is many times safer than most human drivers. Period.
The unsafe operation of any car can result in injury and death. Period.
With the advent of version 14.2.1 of FSD around Thanksgiving, it seems to me that FSD alone, even without supervision, drives like a patient, courteous, confident and attentive human driver, probably safer than most human drivers even were it not being supervised.
Anything you are paid with which to pad your shyster wallet that delays Tesla bringing more of this quality of autonomy and safety to American roads is blood money.
And btw, is that the Florida case where a fellow was digging around on the floorboard to get the phone he dropped, while he had Autosteer (which was not FSD) holding his car in the lane and a pedestrian was killed?
He was not properly operating the vehicle, of course bad things happened. It was sad. Tragic. Human.
It is with the hope of eliminating such human error on our roads that Tesla and other companies are attempting develop and deploy autonomous driving.
youtube · AI Harm Incident · 2025-12-12T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyhSztiv_TZd8uEF-54AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzBoxYsnuaGB591nnJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy6-JWMgh2ppA5iRkp4AaABAg","responsibility":"government","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx17gZzW9FDASpf7bJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxnfmUUKxNGe89WeWN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzl60eMyvccDhvnZ0J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgydO5vV4t_8xi3GQ-B4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgzqkAHOZVQtHRWETu14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzZ9gg-G7XknLeq6iF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzCkxR1f3mz0QAMItN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
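The raw response above is a JSON array with one record per comment ID, covering the four dimensions shown in the coding table. Below is a minimal sketch of how such output might be validated before the coded values are stored; the allowed value sets are inferred only from this sample (and from the table above), not from the project's actual codebook, so they may be incomplete:

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"government", "user", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "resignation", "approval", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that have an id
    and an in-vocabulary value for every coding dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # a record without a comment ID cannot be stored
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records that fail validation are simply dropped here; a production coder would more likely queue them for a retry or for manual review.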