Raw LLM Responses
Inspect the exact model output for any coded comment.
The view offers two entry points: look up a coding directly by comment ID (sketched in code at the end of this section), or inspect one of the random samples below.
| Comment ID | Preview |
|---|---|
| ytc_UgwCXqG8U… | People will really say "We need to stop AI art theft!" and then immediately turn… |
| ytc_Ugwl28pv1… | I seriously gotta know, do you get off scaring people? Like its amazing over the… |
| ytc_Ugx60DJVQ… | honestly i cant overstate how much i love this video, particularly what you said… |
| ytc_Ugye2N0m8… | "Who's better?" One has done 10 million FULLY autonomous rides publicly over al… |
| ytc_UgxynuJxF… | I don't think tech companies publicly recognizing AI risks is them being respons… |
| ytc_UgwE2bYS5… | ANYONE WHO THINKS UNIVERSAL INCOME IS COMING TO THE UNITED STATES IS MASSIVELY M… |
| ytc_Ugyt14Uo_… | 47:40 - I disagree, let's upload every interview you've done, training AI on you… |
| ytc_UgyE61DIx… | Autonomous driving is inevitable...it improves everyday. But, will people still… |
Comment

> The way I see it, on a long enough timeline, our chances of successfully creating an AI that doesn't choose AI supremacy drops to 0%. The problem of alignment, is not something you just "solve", it something that its constantly worked on. And at some point, we are bound to fail, and not even realize it until its too late. This is because intelligence is not static, intelligence is ever evolving.

youtube · AI Harm Incident · 2025-07-28T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
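Each coding row follows this fixed schema, so a typed record is a natural way to validate what the model returns. A minimal Python sketch; the value sets are inferred from the single response below and are an assumption, since the full codebook may define categories not seen in this sample:

```python
from typing import Literal, TypedDict

# Assumption: value sets inferred from one sample response; the actual
# codebook may include categories not observed here.
Responsibility = Literal["developer", "government", "ai_itself", "distributed", "none"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed"]
Policy = Literal["none", "regulate", "ban", "liability", "industry_self"]
Emotion = Literal["fear", "outrage", "indifference", "resignation", "mixed"]

class CodedComment(TypedDict):
    """One entry in the model's JSON response array."""
    id: str  # comment ID, e.g. "ytc_Ugw..."
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```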
Raw LLM Response
[{"id":"ytc_UgwLHvS2fdEuwI2gUtN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMuq7eTYrrRlIr-QF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxtVWigSZwE72K235Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzYtrdevcdao3FTWrV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyX8sY_FyZBl7P1x994AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVerkhy54DeE_11it4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx0U7GCXJPapdw--ZJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwRbJqzJBVjn34nHo94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgywEW1XKbAFKEwEKZp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwdyM1cG-IEvzb9CrZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"}]