Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- `ytc_UgwW5nm1o…` Elon is not the AI expert but he is the one who warned against AI way before the…
- `ytc_UgztztFwj…` Honestly, the future of work that Sanders is describing sounds pretty awesome, l…
- `ytc_UgwejUIPj…` Sadly I believe that AI will disrupt a lot, people will take time to change, if …
- `ytc_UgynpnGBo…` Where is this lady from? I know that in Irland the letter 'H' is pronounced "h…
- `ytr_Ugz1Jg-vo…` unless we wind up in a nuclear war, a pandemic much worse than the last, or the …
- `rdc_lp7gdfl` I think Sam and OpenAI should pull their bootstraps and stop relying on these go…
- `ytc_UgwBwaEkh…` Saying ai art is art is like ordering a pizza that only has ketchup and cheese a…
- `ytc_UgxmQbK-E…` We are lucky that humans now can die with our spirits and consciousness because …
Comment
It is a fascinating dissonance to hear a 50-year veteran of AI express shock that his field is actually succeeding. It recalls the phenomenon of Nobel laureates who drift into incoherence later in life (like Montagnier on water memory). Did he really not believe his own research would eventually work? Sci-fi authors identified the alignment problem decades ago; it shouldn't have taken a Berkeley professor until 2013 to have this 'epiphany.'
More critically, the 'catastrophe' narrative betrays a massive status quo bias. Russell worries about the loss of human purpose, yet explicitly admits that for many, this purpose currently consists of 'repetitive work in windowless boxes'. For the billions already living in economic hell (facing poverty, hunger, and meaningless drudgery) the disruption of this system isn't an existential risk; it's a necessity. You can only fear the end of the world if the current world is actually working for you.
youtube · AI Governance · 2025-12-04T20:0… · ♥ 42
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugzh-YnBzznNZ1qWyQ14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwRQFKmdG19FO8-AD94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyW-LL40QAOwv9pVQ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwgI5BFRrLZ996_qmh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxPhQDdOKhYQvLluoN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwv4OkyAKs3UvA-eaV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz4AjPalg3rFU87gH14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzncXg6mJHXTVaCS414AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxFxasyzxX1MBY0FJ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx9dnj66M0hUfEUgrN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
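Before loading a raw response like the one above into downstream analysis, it is worth validating each record against the codebook. A minimal sketch in Python, assuming the value sets below; they are inferred only from the sample responses shown here, and the actual codebook may allow additional categories:

```python
# Allowed values per coding dimension. These sets are inferred from the
# sample LLM responses above; the real codebook may be broader (assumption).
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "resignation"},
}

def validate(records):
    """Return a list of (comment_id, dimension, bad_value) violations."""
    errors = []
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

# Hypothetical record for illustration (not a real comment ID).
sample = [{"id": "ytc_example", "responsibility": "developer",
           "reasoning": "consequentialist", "policy": "regulate",
           "emotion": "fear"}]
print(validate(sample))  # an empty list means every record is in-codebook
```

An empty result means the response can be ingested as-is; any violation tuple points at the comment ID and dimension that needs re-coding or a codebook update.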