Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgyYIgkSh…`: Robot to instructor: "what is a drive-by?" Instructor response: I think you mean…
- `ytc_Ugxx8AweY…`: Any veteran System Shock player out there knows it's a pretty bad idea to have s…
- `ytc_Ugy206yDS…`: Truly classic elitism. Sees art, likes it, sees its AI, suddenly hates it. There…
- `ytc_Ugye_KX4E…`: I hope that nightshade works because if the only ai generators that are continui…
- `ytc_UgyGHPjwe…`: Humans make ai so why they makes more intellectual instead knowing that that are…
- `ytc_UgxjTwFcN…`: First scientist proved it’s 150 per second. Humans are overwhelmed by algorythms…
- `ytr_Ugy_hk9v-…`: @lepidoptera9337 but would you be able to reverse engineer and decipher the comb…
- `ytc_UgwhjVsZ1…`: 00:00:13 that's because the idea of predictive policing *was* introduced to the …
Comment
Hinton’s warning hits like a gut punch—AI’s outpacing our ability to control it, and we’re still treating it like a shiny new toy. If the ‘Godfather of AI’ is this worried, what chance do we have to rein it in before it’s deciding our fates? What do you think the breaking point will be?
youtube · AI Governance · 2025-06-27T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
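The look-up-by-comment-ID flow on this page can be sketched as a dictionary index over coded records. This is a minimal illustration: the record shown is copied from the raw response below, but names like `lookup` and `by_id` are hypothetical, not the app's actual API.

```python
# Minimal sketch of look-up by comment ID over coded records.
# The record format mirrors the raw LLM response on this page;
# `records`, `by_id`, and `lookup` are illustrative names only.
records = [
    {"id": "ytc_Ugy1tjUaOr_vQXTWfcZ4AaABAg", "responsibility": "distributed",
     "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
]

# Build the index once; each subsequent lookup is O(1).
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id: str):
    """Return the coded record for a comment ID, or None if not yet coded."""
    return by_id.get(comment_id)
```

The record used here matches the coding-result table for this comment (responsibility: distributed, reasoning: consequentialist, policy: regulate, emotion: fear).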
Raw LLM Response
```json
[
{"id":"ytc_UgxJO-QVll_iexAsiYR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgztEGX6UuOKGEGyXOh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugym3FHA5CgmXpQKLdd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwzAakxNAzFVQo26tl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxg2CzZ2GSRistsecx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz0VAp--8pu4Cm3XzJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwLKKrN2wWeh_JBhqF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy35GzvJrDNpbCsVd94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy1tjUaOr_vQXTWfcZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzdz4atWQhxgjnmtm94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
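A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below assumes the response is a JSON array of flat records with the four dimensions shown; the allowed-value sets are only those observed in this sample, so a real codebook may contain additional labels, and `parse_coding_response` is a hypothetical name rather than part of any tool shown here.

```python
import json

# Value sets observed in the sample response above; treat them as
# illustrative, not exhaustive: the full codebook may define more labels.
OBSERVED_VALUES = {
    "responsibility": {"developer", "none", "user", "company",
                       "ai_itself", "unclear", "distributed"},
    "reasoning": {"virtue", "consequentialist", "unclear", "deontological"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"outrage", "indifference", "fear", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises ValueError if a record is missing a dimension or uses a
    label outside the observed value sets.
    """
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        codes = {}
        for dim, allowed in OBSERVED_VALUES.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
            codes[dim] = value
        coded[comment_id] = codes
    return coded

# Usage with the first record from the response above.
raw = (
    '[{"id":"ytc_UgxJO-QVll_iexAsiYR4AaABAg","responsibility":"developer",'
    '"reasoning":"virtue","policy":"unclear","emotion":"outrage"}]'
)
coded = parse_coding_response(raw)
```

Validating eagerly like this surfaces malformed or off-codebook model output at ingest time, rather than letting it propagate into downstream analysis.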