Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Most people in the world worry about nuclear war. AI is the real threat. AI will…
ytc_UgwRySxgp…
No, they just dont work actually. Check some obscure book from 1910 probably wil…
ytc_Ugz22X5yd…
AI is useful for finding bugs in code and offering suggestions, but not at writi…
ytc_UgxCThINx…
Can they get nuclear plants designed, approved, and built in a time scale (and c…
rdc_lp7golt
Imagine if the robot get that as a command and start plan to destroy humanity 😰…
ytc_Ugyzac0co…
Future AI systems will almost certainly lean on this same kind of “avoidance” be…
ytc_UgyXf8CjF…
I suspect all these predictions are wrong. I agree AI will grow more powerful an…
ytc_UgzyHU4KX…
I will do the Robot dance with you Sophia🤖 I'm a Star Wars, Robots and Artificia…
ytc_Ugz1iEteV…
Comment
Conclusion of this video:
Hinton paints a future where AI could uplift humanity (healthcare, education) or destroy it. The difference hinges on whether we prioritize control over acceleration. His interview is less a prediction than a plea: we must confront AI’s risks with the same ingenuity used to create it – before the "tiger cub" outgrows us. The time to act is vanishingly short.
youtube
AI Governance
2025-06-19T22:2…
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwfvQStg5o-xnWi_cB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxOPSJ6JhCOKJwbCF54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugylb93HzouaXF5YipV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxIJ9KWwgYK2XO86694AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz9yojJHyzhreUdaEt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyRKNzClFHDcy_6Cdt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzOmLDAXTTwoeKZAs14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzDqkeZc-q7mauA1lt4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyiUuevM-PRDB8J-fh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwZFjUpCeY79tzeEPV4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"indifference"}
]
```
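A raw response like the one above can be parsed and sanity-checked before the per-comment results are stored. The sketch below is a minimal example, not the tool's actual pipeline: it assumes only that each record carries the five dimensions shown in the coding table (`id`, `responsibility`, `reasoning`, `policy`, `emotion`), and uses two records taken from the response above.

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgwfvQStg5o-xnWi_cB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxIJ9KWwgYK2XO86694AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Every record must carry exactly these coding dimensions.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    assert set(rec) == EXPECTED_KEYS, f"malformed record: {rec}"

# Tally coded emotions across the batch for a quick distribution view.
emotion_counts = Counter(rec["emotion"] for rec in records)
print(emotion_counts.most_common())  # → [('fear', 2)]
```

A real pipeline would also need to handle LLM output that is not valid JSON (e.g. retry or re-prompt on a `json.JSONDecodeError`) and records whose dimension values fall outside the codebook's allowed categories.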