Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Pithy, but "real danger" is pretty reductive. It's the sort of average-human-lev…
rdc_ohxxga6
Are you dumb? The robot was made to place small boxes not a full grown human…
ytr_UgynOmNaj…
Have you seen the film Elysium? That’s where we’re headed. All the ultra rich li…
ytr_Ugxvc8463…
AI will never fully get acquainted with emotional intelligence. It only IMITATES…
ytc_UgxMNZg5u…
Yes, in a sense, generative AI can be seen as a form of creativity since it has …
ytc_UgwSc_749…
AI propaganda 😂
It’s already past the point where i trust what it produces and …
ytc_UgyvpAC4A…
"iss okay, we reprace yuu wit da robot, it duu a better job, no bafroom breaks. …
ytc_Ugxx3XmYH…
If everyone gets fired cause they get replaced with AI, who is left to buy shit …
ytc_Ugzsg51Ox…
Comment
1:07:22 If the AI team considers the patient's profile and determines they might be a threat in the long term, would it be possible for it to misdiagnose? Example: a young person who might become a politician one day doesn't get the best treatment and never pushes for legislation that the AI wants to prevent. It could be highly targeted at people who we would not even think of as important at that point in history, for example. AI misdiagnoses 16-year-old MLK Jr.'s chest cold which turns out to be double pneumonia for example.
youtube
AI Moral Status
2026-03-03T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw2lC8T6rEQGjgwLPR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz0_RMtm6G_eREqEQd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzCdladb4_DlqJeoaR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzmXqhiLm3KbEpR7iJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwW5Xp5tFx1LiOq1BF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxCh3wo_i8GmcynC_F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzxQ57i3d5w2WCyrBF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxARRFJPpZ-3LsS07N4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxSEBFDFflcbMi7MP14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyP_cNNdkGIRF-Tv4d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
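A response in this shape can be validated before the codings are stored. The sketch below is a minimal, hypothetical validator, not part of the tool itself: the four dimension names come from the raw response above, but the allowed value sets are inferred only from the values observed in this sample, not from a full codebook.

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from this sample (assumption:
# the real codebook may define more categories than appear here).
DIMENSIONS = {
    "responsibility": {"ai_itself", "developer", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop records missing a comment ID
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            valid.append(rec)
    return valid

# Two records taken verbatim from the response above.
raw = '''[
  {"id":"ytc_Ugw2lC8T6rEQGjgwLPR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxCh3wo_i8GmcynC_F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

codings = parse_codings(raw)
print(len(codings))                            # 2
print(Counter(c["emotion"] for c in codings))  # one "fear", one "outrage"
```

Records with an out-of-vocabulary value are dropped rather than coerced, so a model that drifts from the prompt's label set surfaces as a shrinking valid count instead of silently polluting the coded data.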