Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgxxS2gpb…`: "Right,the only person in the world who should b able to control AI is Elon guys …"
- `ytc_Ugy1qCkOD…`: "Lets support Pause AI follow them spread this as much as we can They need our he…"
- `ytc_Ugw7D63ox…`: "Come now, this video is just trying to plant the seeds of fear and paint the fut…"
- `ytc_UgyClCfxZ…`: "If every industry were automated tomorrow, what would everyone do? Sometimes you…"
- `ytc_UgzPuAazx…`: "From my vantage point, companies do not want to train entry level developers so …"
- `ytc_Ugy0NC3ls…`: "I've never been more grateful than now that I'm a welder in my 30s. I'll be in m…"
- `ytc_UgzUTE5N6…`: "This is terrifying I can't understand how someone can fall in love with an app o…"
- `ytc_Ugwk25r14…`: "13:34 What an embarrassing argument. What if Phil Hansen, a pointillist artist, …"
Comment
Every time I see some AI-doomers, such as Hinton, or Yudkowski, being interviewed, I feel like screaming at the screen/podcast... Not because of the objective sci-fi bullshit fantasies they're spewing but rather because it's infuriating how every single time the interviewer is letting all the empty assertions just slide. Never pushing back or asking for clarification/specifics. Interviewers bear the responsibility of getting to the bottom of things when dealing with somewhat contentious topics. Yet, everyone is always failing, including host of this "show" here...
As long as you get views, you're willing to amplify almost anything, no matter how delusional or impossible, right? Just bring more half-mentally-ill celebrities on...
People hosting these kinds of interviews are the enablers of such delusions like Hinton's
Platform: youtube · Topic: AI Governance · Posted: 2025-06-26T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyzMO6Yav3xEoh8Y754AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwuXjB61tiFqDj-cvB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzBg3tYIm4IP9olICN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyu1ZKTpimcgSE82iV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxTtYMkkDAgRQEyYeJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdYx05LDk7Ut9RukV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwer29hRPUpLEhIyFx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwp0Sxv_mB55NaANXB4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxV5zprxMKIKoVv1rh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwbw9ms4SgCmlVuuEB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
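A raw response like the one above can be parsed and indexed by comment ID for lookup. The sketch below is a minimal, hypothetical helper, not the tool's actual code; the allowed values for each dimension are inferred only from what is visible in this dump, so the real codebook may define additional categories.

```python
import json

# Dimension vocabularies inferred from the values visible in this dump
# (assumption: the real codebook may include more categories).
RESPONSIBILITY = {"developer", "company", "distributed", "ai_itself", "none", "unclear"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"outrage", "fear", "approval", "resignation", "indifference", "mixed", "unclear"}


def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding records) and
    index it by comment ID, dropping records whose values fall outside
    the expected vocabularies."""
    records = json.loads(raw_response)
    indexed = {}
    for rec in records:
        if (rec.get("responsibility") in RESPONSIBILITY
                and rec.get("reasoning") in REASONING
                and rec.get("policy") in POLICY
                and rec.get("emotion") in EMOTION):
            indexed[rec["id"]] = rec
    return indexed


raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"unclear","emotion":"mixed"}]')
codings = index_codings(raw)
print(codings["ytc_example"]["emotion"])  # mixed
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each inspection is a dictionary hit rather than a rescan of the raw response, and out-of-vocabulary records are surfaced by their absence rather than silently coded.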