Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
LLMs do not have:
a stable inner agent,
a consistent set of beliefs,
goals th…
ytc_UgyarHndj…
Ok I didn't go that far with asking it to ask me a question but it did ask me if…
rdc_mw7ndoa
AI will be the death of humanity. It’s being prioritized too much in the US work…
ytc_Ugw66ScST…
Wolfram is very smart, and maybe, too smart for his own good, to admit Yudkowsky…
ytc_Ugy8QpWKb…
next thing you know, someone uses this painting to train an A.I. hate to say it,…
ytc_UgzVmWPdn…
As soon as these things become affordable to the average man, human beings are g…
ytc_Ugxo8gelp…
🤣😭☠️ right? It’s ok for the government to own FB, TikTok, Twitter and silence an…
ytr_UgwlV_rWF…
In the case of videos, the government could simply force corporations related to…
ytr_UgwPnWLdy…
Comment
Watching this in Jan 2026 and realizing that, only 7 months later, 95% of the so-called "Community Notes" throughout the video haven't aged well already. Scenarios labelled "purely speculative"? It's already damn easy to see they aren't purely "speculative". Not based on my opinion, but on more recent episodes on this very same channel, on the very same topic (dangers of AI and AGI).
Mind boggling!
youtube
AI Governance
2026-01-09T15:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy-uL4FUYpmJHG_OJB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwLxgC3UHrVeM5KCIZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzMFoZ4tb-DSNwThs94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxNa56OMptUJ9IT-Dx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxRQ2CnjWHRJK9FNkV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxasmKyc4iSqCAJCGB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzi1OkWs6QKNaQ8tgF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwX3scK61wHDwDs9ZV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyKJdhH7_fXKC_MQbx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyBdoKzAPyTfN5t8FV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
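A minimal sketch (not part of the original tool) of how a raw response in the format above could be parsed, validated, and indexed for the "look up by comment ID" view. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the coding-result table; the function name and validation rules are assumptions.

```python
import json

# The four coding dimensions plus the comment ID, as shown in the raw response.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw JSON array of codings and index the records by comment ID.

    Raises ValueError if any record is missing one of the expected fields,
    so malformed LLM output is caught before it reaches the lookup view.
    """
    by_id = {}
    for rec in json.loads(raw):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        by_id[rec["id"]] = rec
    return by_id

# Example lookup using the first record from the response above.
raw = ('[{"id":"ytc_Ugy-uL4FUYpmJHG_OJB4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"fear"}]')
codings = index_codings(raw)
print(codings["ytc_Ugy-uL4FUYpmJHG_OJB4AaABAg"]["emotion"])  # fear
```

Indexing by ID makes the lookup O(1) per query and also surfaces duplicate IDs early (a later record silently overwrites an earlier one, which could be turned into a hard error if that matters).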