Raw LLM Responses
Inspect the exact model output behind any coded comment: look it up directly by comment ID, or browse the random samples below (a minimal lookup sketch in Python follows the list).
- "I am generally on your side here. But to be honest it feels as you didnt adress …" (ytc_Ugx8MY4Zc…)
- "Anyone who has actually worked with AI can tell you it's far from what people pa…" (ytc_Ugy4CvFYn…)
- "I might have just shared less than 10 pcs of my artwork but the time i put for e…" (ytc_UgwgaGToG…)
- "Crazy I’m alive to see a real robot shooting 😮 we don’t learn nothing from the m…" (ytc_UgyZrZzV4…)
- "AI doesn't have a real logic, it is obviously a simple answer not regarding the …" (ytc_UgzVAGlD1…)
- "They could ban algorithmic suggestions and force users to search for a specific …" (rdc_nkq0h1r)
- "I eagerly hope they give an altruistic and truly useful application to …" (translated from Spanish; ytc_UgxKbgTbp…)
- "Excellent!! I hate opening a story and hearing AI voice. AI just isn't as funny…" (ytc_UgytQZ9fN…)
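A minimal lookup sketch, assuming the coded results are stored one JSON object per line in a JSONL file; the filename and helper name here are illustrative, and the record fields mirror the raw LLM response format shown further down.

```python
import json

def lookup_coding(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coding record for a comment ID, or None if absent.

    Assumes one JSON object per line, each carrying an "id" field plus the
    four coding dimensions (responsibility, reasoning, policy, emotion).
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch the coding for the comment inspected in this section.
coding = lookup_coding("ytc_UgzdLgUpm0zqRww_36x4AaABAg")
print(coding)
```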
Comment
> 55:30 human beings are notoriously good at contending with speculative probabilities after all. I am in the field and I think there's at least 99.9999% chance it goes horribly. In part fundamentally but also just because of our lack of human systems for contending with this kind of problem. There are things we could do - but we won't. So the worst outcome is almost guaranteed. The problem with AI is that it's not a thing where you can go, oh shoot, we gotta fix that. No. If you notice it has gone wrong, it is too late, by definition, because it means the damn thing doesn't care anymore if you know. If it goes wrong, just once, ever, it's over. For everyone, forever.

Source: youtube · Video: AI Moral Status · Posted: 2026-01-09T00:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

The coding above was returned as part of a batch; the fourth record below (`ytc_UgzdLgUpm0zqRww_36x4AaABAg`) is the one that matches the comment shown.
```json
[
{"id":"ytc_UgxXvP06xB_rvHXU8nl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxB2lUMC10V2WCKMdh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzG6m5nNk-ZQp4yPdd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdLgUpm0zqRww_36x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXKB0Q9EOyb0TYAQ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz9jWegCqJ5MLH9GXF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxjHKweqa7s6ZC0JHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy-KZ4-7G2BKQOny894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugy88yz9_C5B-z5vALJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-G5YAEcxVcUZLiZt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
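Because the response is free-form model output, it is worth validating before loading it into the coded dataset. A minimal sketch, assuming the allowed values per dimension are those observed in this sample; the project's actual codebook is not shown here, so these sets are illustrative.

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the real codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "government", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "resignation", "outrage", "indifference", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every record against the codebook."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for i, rec in enumerate(records):
        missing = {"id", *CODEBOOK} - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing fields: {missing}")
        for dim, allowed in CODEBOOK.items():
            if rec[dim] not in allowed:
                raise ValueError(f"record {i}: {dim}={rec[dim]!r} not in codebook")
    return records
```

Failing fast here surfaces malformed or off-codebook model output before it reaches the coded dataset, rather than letting it skew downstream tallies.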