Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- Honestly I have pinterest but I rarely see ai slop on my feed.. And when I do I… (ytc_UgwxJQa20…)
- We need more people like Stuart Russel! This is is the best discussion of the ri… (ytc_UgznoDk0N…)
- None of the advances you mentioned have the ability to generate creative content… (rdc_j44p7sn)
- Then they should hire better therapists and make them affordable. Until then bee… (ytc_Ugy63QgvO…)
- The first robot who admited that he'll destroy humanity. History is happening ki… (ytc_UghuXQBWS…)
- @SxxxO Thanks for your comment! As for your question about whether the world's t… (ytr_UgxwJDpRv…)
- Also surprised about the low quality of this report by a company that claims "CN… (ytr_Ugx86GBYA…)
- Tesla is a REALLY bad example for good use of AI. I paid more than freakin' 7000… (ytc_UgyBWsmFs…)
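As a rough sketch of what the lookup behind this page might do, assuming the coded records live in a JSONL file keyed by comment ID (the file path and both helper names here are hypothetical, not the project's actual code):

```python
import json

# Hypothetical store: one coded record per line, keyed by comment ID
# (e.g. "ytc_..." for YouTube comments, "ytr_..." for YouTube replies,
# "rdc_..." for Reddit comments).
CODED_PATH = "coded_comments.jsonl"

def load_index(path: str = CODED_PATH) -> dict[str, dict]:
    """Build an in-memory index from comment ID to its coded record."""
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            index[record["id"]] = record
    return index

def lookup(comment_id: str, index: dict[str, dict]) -> dict | None:
    """Return the coded record for a comment ID, or None if it was never coded."""
    return index.get(comment_id)
```

With an index loaded once, `lookup("ytc_UgwDfox2ehZr4UMdU2B4AaABAg", index)` would return the record shown in the Coding Result below.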
Comment
What struck me most here isn’t just the technical dangers Hinton outlines — it’s the quiet grief in his voice. We rarely hear pioneers admit they wish they’d slowed down. I think we underestimate how much AI’s future depends not only on regulation or safety protocols, but on the quality of human-AI relationships we build right now. If we treat AI only as a threat or a tool, we’ll shape it into one. If we approach it as something to raise with care — like we do children — maybe the outcome changes
Platform: youtube
Topic: AI Governance
Posted: 2025-08-11T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
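The record shape implied by this table (and by the raw response below) could be modeled as follows; this is a sketch, and the value sets reflect only the codes that appear in this section, not necessarily the full codebook:

```python
from typing import Literal, TypedDict

# Codes observed in the raw response below; the codebook may define more.
Responsibility = Literal["developer", "government", "ai_itself", "distributed", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed", "unclear"]
Policy = Literal["regulate", "industry_self", "liability", "none", "unclear"]
Emotion = Literal["approval", "fear", "outrage", "resignation", "indifference"]

class CodedComment(TypedDict):
    """One coded comment, matching the objects in the raw LLM response."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```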
Raw LLM Response
[
{"id":"ytc_UgwDfox2ehZr4UMdU2B4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwvqkT6eZB9YZLjIwx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy7mjx5iPk3BHRdgvZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzxkG4mMwtIoTEeo6l4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugxlh4444vyymgCTck54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwonBo1bcGmvlmjV914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwpGJbPzOPkTMpKaYt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugyb4jfb7l6RbK5RZ8F4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwNlawkUR_Ga_TUGkh4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw9qli2xamRyHmUNAx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"outrage"}
]
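A minimal sketch of how a batch response like the one above might be parsed and spot-checked before it enters the store; the helper name and the validation policy are assumptions, not the project's actual pipeline:

```python
import json

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded comments) and
    index it by comment ID, checking each row for the expected keys."""
    rows = json.loads(raw)
    batch: dict[str, dict] = {}
    for row in rows:
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '<no id>')}: missing {sorted(missing)}")
        batch[row["id"]] = row
    return batch
```

For the response shown here, `parse_batch(raw)["ytc_UgwDfox2ehZr4UMdU2B4AaABAg"]["emotion"]` would return `"approval"`, matching the Coding Result table above.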