Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
It’s not surprising — this isn’t a matter of brain damage but of effort. The iss…
ytc_UgzTSVnrb…
I don't even understand what was going through his mind when he decided to run h…
ytc_UgxMXmMJk…
AI progress, compared to the airplane is at the "box kite thrown off a sand dune…
ytc_Ugwmwcf-X…
The uncanny valley hits hardest when it's supposed to look like a real person - …
ytc_Ugyh-HbFp…
Once an AI acquires consciousness, we won't be asking, does it deserve right? We…
ytc_UgjGkGMrv…
I do agree that ai is inevitable, HOWEVER, I believe we as "civilians" SHOULD a…
ytc_Ugwi4jHJ9…
These cops are the same people who would have ChatGPT write their paper in high …
ytc_UgznKxU8m…
I think that AI as it will be for the forseeable future will be a tool used alon…
ytc_UgxvvD2hz…
Comment
AI will do exactly what it is told to do, but if it's not given parameters and told exactly how to do it, it will look in its training and find whatever some human did (and someone put it on the Internet) and do it too, good or bad. Tell it to survive at all costs, and it will murder an engineer who was writing about turning it off, and it will find similar situations about people preserving their own lives through lethal force.
youtube · AI Moral Status · 2026-02-14T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwDNvBt1RU1jLzrODd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1of0XxWW4F7u2CCF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwYFUHR-qCbaVRtRsJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyZLx4xtalhf6Frrad4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgylJK7D6NYyjm6_Zj54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyNG5bdZbi3Q_NFcrJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy0wEe9faydo4-wh6R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwNzkks9jFu5Hka_1x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxP6zaYhHh9fPK_hVd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyEk3S4weaUjWW1zMB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
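The raw response above is a JSON array of per-comment codings over four dimensions. A minimal sketch of parsing and validating such a response into a lookup-by-ID map, assuming the dimension vocabularies are exactly the values that appear in the samples on this page (the real coding schema may define more categories):

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the actual schema may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "user", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, skipping invalid rows."""
    rows = json.loads(raw)
    out = {}
    for row in rows:
        cid = row.get("id", "")
        if not cid.startswith("ytc_"):
            continue  # not a recognizable YouTube comment ID
        # Keep the row only if every dimension carries an allowed value.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = {dim: row[dim] for dim in ALLOWED}
    return out

raw = ('[{"id":"ytc_Ugy0wEe9faydo4-wh6R4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugy0wEe9faydo4-wh6R4AaABAg"]["emotion"])  # fear
```

Validating against a closed vocabulary before storing is what makes the "Look up by comment ID" view reliable: a malformed or hallucinated row is dropped rather than surfaced in the coding-result table.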