Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I agree that the AI companies should pay into universal income if that will be u…
ytc_UgzB4aoML…
the so called AI CEOs - they all are nerds. on the top of tgat, i would guess qu…
ytc_UgwcdpK9S…
As far as I'm concerned, if "AI" doesn't possess subjectivity/a POV/consnciousne…
ytc_Ugxw8hMwP…
@ludogienezever You don't even have 1 bulletpoint on how it might ascend the …
ytr_UgxiaIYxY…
On your list of worries you forgot to list the worry that we can’t even begin to…
ytc_UgyBZ-EGn…
I can 100% see the kind of kids doing this... in junior high school in France, I…
ytc_UgwY6aqBm…
I wouldn't trace normal art because it doesn't feel moral, but I will trace ai a…
ytc_UgxJankkX…
@BrendanDell Dude, I think I need to step in because this is not even laughable …
ytr_UgwsOUHFt…
Comment
@TheDiaryOfACEO None of us are in any doubt that you're an exceptional interviewer Steven. But your push on the push the button question is your greatest work to date 😂 Sublime journalism. Given that this podcast unequivocally signposts to potential extinction coupled with alarmingly low regard for safety in the here and now, the answer can only be YES. Predictions or probabilities that things might/maybe/probably be OK, do nothing to completely mitigate the risk of extinction, even if negligible. We'd surely rather live without our ChatGPT, than not live at all 😅
youtube
AI Governance
2025-12-06T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugys343yi659IUtu4HZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwgfj8uRjALjnF1bLl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyeT4Tj6t2GnozxQ1B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw8Q5xIW1z5TgevcVJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyJNrm7-N8xcdcNVvN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzWfwJnMefBqawCj7Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzl8-sWQhcx5Z_WZdZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyiX7tvKGnvAfMYuGl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyXiqcaEUzVpXRxXLZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwgwsJkIAMmNjE41Hx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]
```
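A minimal sketch of how such a raw batch response might be parsed and validated before the codings are stored. The allowed values per dimension are inferred from the responses shown above; the actual codebook may differ, and the function name is illustrative.

```python
import json

# Allowed values per coding dimension, inferred from the responses
# above (an assumption -- the real codebook may include other labels).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval",
                "resignation", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings.

    Rows missing a comment ID, or using a label outside the
    expected set for any dimension, are dropped.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # no comment ID to attach the coding to
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical one-row response, mirroring the format above
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"virtue","policy":"ban","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # → 1
```

Dropping malformed rows (rather than failing the whole batch) keeps one bad coding from discarding the other nine in a ten-comment batch.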