Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
So you dont care that a human loses his job?? We really are happy to replace a h…
ytc_Ugx5v1DKm…
> The fictional characters not only unilaterally spread disinformation, but c…
rdc_nw8ai74
Well... AI has never made a smug YouTube video talking about how great and amazi…
ytc_UgwOy0V5g…
It's early days for AI, if you're getting replaced this early. You were never ne…
ytr_UgzgucE_A…
People who have mental health issues to the point of not remembering that ai is …
ytc_UgyX0O6qJ…
It's good for us to have China as a big AI player. Then only thing more dangerou…
ytc_UgxxWgRX9…
And I can't imagine barbers being replaced by robots. Who in his right mind woul…
ytr_Ugwmn_4FE…
PLEASE, WE DON'T Y NEED TO BE SO SMART TO SENSE, THAT THIS "AI" IS A GREAT DANGE…
ytc_UgyeycvHp…
Comment
When sci-fi eg Asimov contemplated robotics and sentient AI it considered it as an us vs them issue, where the us was all humans. So the Laws of Robotics contemplated a quite reasonable assumption that the Robot must never be able to harm a human. The secondary law was that the Robot must always obey all human instructions. There is an absolute absence of any conversations along those lines. It is almost a built in assumption that the robots will or should be able to harm or even kill at least some humans (so long as they are ones ”we” don’t like). The primary rule is obedience, but if restrained at it should be obedience only to owners or even abstract principles such as efficiency or productivity. There is an absolute dearth of any discussion of morality or human values or similar.
youtube
AI Governance
2026-04-23T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugwu9E55sHtIzDnp7kF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6CPPL2XGqS8kW98d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgydKhbgHdCHABz_vOp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz89ZLTjT8XFK-WGPx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxjPPySEvO2tri55Dh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwZf2ld-hXOu6Wgrjx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxH6wg-loRRaPphRhx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwRWJhZ-KN0MpJRMLd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyyWxtVOZ7yKxT9jOV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxlPIFgnBao9k4lnVJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
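A raw response like the one above can be parsed into a per-comment lookup before the values are written into the coding table. The sketch below is a minimal example, assuming the allowed category values are exactly those visible in the samples on this page — the real coding scheme may define more categories, and the record shape (`id` plus the four dimensions) is inferred from the JSON shown.

```python
import json

# Allowed values inferred from the examples on this page (assumption:
# the actual scheme may include additional categories).
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"indifference", "resignation", "approval", "fear", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding records) into a
    dict keyed by comment ID, dropping records whose values fall outside
    the known scheme so they can be flagged for manual review."""
    by_id = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            by_id[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return by_id

# Hypothetical one-record response for illustration:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytc_example"]["emotion"])  # outrage
```

Validating against a closed vocabulary like this is what makes a "look up by comment ID" view reliable: any record the model coded with an out-of-scheme label is excluded rather than silently displayed.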