Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugx1DU3Cu…` — "why you talking bad about robots to AI. you're about to start the AI rebellion…"
- `ytc_UgxihMhnJ…` — "It’s not just these events. I went to a comicon in Canada a couple weeks ago and…"
- `ytc_UgzJ2MBTl…` — "Zuckerberg, Altman, Amodei, Karp, Brin, page all leading AI. Hmm something in …"
- `ytc_UgzOm4Nxr…` — "This just makes me sad, the ai in the first place is a human level art, its just…"
- `ytc_UgzIAUnwH…` — "The first thing I thought of when I saw the title was Rick's butter-passing robo…"
- `rdc_cpnnqjt` — "I would think there would be some leeway in the driver-driverless laws. Driving …"
- `ytc_Ugy-3U7Zi…` — "I’ve tried vibe coding some of the projects I’ve been working on to see if AI ca…"
- `ytc_Ugxtb04Kf…` — "Apparently, NO ONE of the so called "geniuses" in the bureaucratic corrupt deep …"
Comment
Can we take a second and assume that there is a 25%-35% chance that if the US rushes toward AI it will go bad. And let's also assume that if the US doesn't rush ahead, China will rush ahead at the same percent chance of things going bad. Wouldn't it be preferable for the 65%-75% chance of things going well be worth the dual existential threat of it going wrong or of it going right and a rival geopolitical power controls it?
youtube · AI Moral Status · 2025-11-04T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwYMbnDM2VGwox_aOJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxYOycNbQ4IvjMtNMV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzJTMhPNMrUXQM_uaJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxrgXxoJVinsyrkMsV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwbeUYUR7QWsH2Lz4t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxG5K3a3A25sztovYJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyhsNprkIeiJxp3n3x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwLBIjK-OVx0rpwkFF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuvROVe4G8ECnVlfd4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyP_aecgiga2Y5SZLt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
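A raw response like the one above can be parsed and sanity-checked before its rows are accepted as codes. The sketch below is a minimal, hypothetical validator: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown, but the allowed value sets are an assumption inferred only from the codes visible in this dump, and may not match the project's actual codebook.

```python
import json

# Allowed values per coding dimension. These sets are an ASSUMPTION,
# inferred from the values observed in the raw response above; the
# real codebook may define more (or different) categories.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are recognized."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # A row passes only if every dimension is present with a known value.
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: one row copied from the dump above.
raw = ('[{"id":"ytc_UgwLBIjK-OVx0rpwkFF4AaABAg",'
       '"responsibility":"government","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
print(parse_raw_response(raw))
```

Dropping (rather than repairing) unrecognized rows keeps the coded dataset clean at the cost of recall; a stricter pipeline might instead log and re-prompt for the failing IDs.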