Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I think we will join AI, it’s just logical for 23 watts you have such a powerful…" (ytc_UgyS6STYd…)
- "@zaratustra27 It's literally not always positive, especially if you just think f…" (ytr_Ugw0_q2UM…)
- "Even if we gave any credence to any point that NaNoWriMo had in defense of AI, h…" (ytc_UgwmgN-zb…)
- "Hello! I am a disabled artist. I think the whole fact that people are defending …" (ytc_UgzB9KW7I…)
- "*one time astro(a game character from Roblox if u don know) I was his “new toon …" (ytc_UgzP4Stfa…)
- "It will not happen. Just check how much wall street invest on AI and how much th…" (ytr_Ugwb_8gP8…)
- "Remember, Ai learns from our actions , and since 1 billion people use ChatGPT ev…" (ytc_Ugx-U1IEl…)
- "I prefer not to use AI anyway, and the copyright folks are gonna do what they ar…" (ytc_Ugzjs2dTK…)
Comment
He's worried about Musk when he seems to trust the slugs who run our bureaucracies! What a brilliant idiot he is. Didn't realize AI could be dangerous? He's very gapped in his mental ability. Brilliant in some areas but utterly stupid in others, like most or all of us are. Even Einstein.
When it became clear you weren't going to challenge him in a meaningful way, I stopped listening. You at least asked him how he possibly could have not seen the problems AI would pose, long ago - when any basic sci fi writer could (which you didn't add). You're far too kind to your guests. They need to be asked tougher questions. Politely of course. But they need to be challenged. You're mostly not doing this.
youtube · AI Governance · 2025-06-16T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyT3toftUmnH9KZAvd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyl38v1BMwNBVtfI1l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwQjkCLQSH0NjqY2vN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxog6YjsavKfXO7wvR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyY8vuTQzynej95eCJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy4rfjM9BDYEmu9wjd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw1ZLBl7UXJj1Kmh5F4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy99Bxv7GezTlsvBeN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzb2xnH3AovzuYvQjV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxPqSBDM9i8dNovMEV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
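The response above is a plain JSON array, one object per coded comment, with the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated is below; note that the allowed value sets are inferred only from the values visible in this sample (the full codebook may define more categories), and the function name is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"user", "company", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows.

    A row is kept when it is a dict with an "id" field and every
    dimension holds one of the allowed values.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example with one valid and one invalid row (hypothetical IDs):
raw = (
    '[{"id":"ytc_a","responsibility":"user","reasoning":"virtue",'
    '"policy":"unclear","emotion":"outrage"},'
    '{"id":"ytc_b","responsibility":"nobody","reasoning":"virtue",'
    '"policy":"unclear","emotion":"outrage"}]'
)
print(parse_coding_response(raw))
```

Dropping malformed rows rather than raising keeps a single bad coding from discarding an entire batch; a stricter pipeline might instead log or re-prompt on invalid rows.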