Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- The most concerning thing abouth this video, is that someone called this robot “… — ytc_Ugx3LnBHX…
- If the cons of it outweigh the pros, we should do away with a.i. but because the… — ytc_Ugwxflp3A…
- Real AI? Probably not. I think it was a joke he programmed on purpose. Help me … — ytc_Ugi3BnKxY…
- Ugh, the whole family shaming the sister and failing to support her is sickening… — ytc_Ugwp2-sWM…
- I use chatGPT for work and its still very dumb, it is far from ready to replace … — ytc_Ugz-syA1t…
- @thewannabecritic7490 Corporate AI is ripe with abuses, but indie AI actually pr… — ytr_UgwsQOCIa…
- @Dirty_Davos from what I seen patience. also skill isn't the goal the goal is t… — ytr_Ugy-1fiz3…
- I really don't understand why people are regarding ai as some invincible great f… — ytc_Ugw8qV7wu…
Comment
I'm confused about something from this conversation - on the one hand Yampolskiy talks about the dangers of AI and the possibility of human extinction and then he also discusses that it is highly probable that we are in a simulation. So wouldn't it be necessary for AI to advance to the point of being able to create a realistic simulation in which we could "live"? Also, why would we worry about the existential threat of AI if we are most likely already in a simulation?
youtube · AI Governance · 2025-09-24T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyJ-03TV5ouysbDied4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugx4FFSo3xZVhVeQdxh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyWYH2mhT3EWB1r29p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwJoMaoTjrGAxgNLQB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyMGGTmZqm6yGxpPxt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwWnvraP-gwZLrM5ZN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw4jKbb4XN8QZZ6ZXl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxlC1GUZe6FMiaOOBV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxSUZgHLDy0ISO_t0h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz76BnOvpA87SkgHNh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
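The lookup-by-comment-ID view above can be reproduced from a raw response like this one by parsing the JSON array and indexing it by the `id` field. The sketch below is a minimal, hedged illustration, not the tool's actual implementation; `index_codings` is a hypothetical helper name, and the two rows in `raw_response` are copied from the array shown above.

```python
import json

# Two rows copied from the raw LLM response shown above.
raw_response = '''[
  {"id": "ytc_UgyJ-03TV5ouysbDied4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgyWYH2mhT3EWB1r29p4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codings)
    and index the rows by comment ID for direct lookup."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_UgyWYH2mhT3EWB1r29p4AaABAg"]["emotion"])  # → mixed
```

In practice a raw response may fail to parse or omit the `id` key, so a production version would wrap `json.loads` in error handling and validate each row before indexing.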