Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Microsoft isn’t scaling down on data centers.. they just unveiled the most power… (ytc_UgxALaLG9…)
- Everyone can draw. Drawing good enough is up to the person that tries. PewDiePie… (ytc_UgxBAOQUg…)
- Hello Doctor! I just matched to diagnostic radiology couple days ago and this AI… (ytc_UgxjgvP3M…)
- I think writers need to embrace the truth: LLMs aren’t going anywhere. Even if y… (ytc_UgwMSDozh…)
- 4:52: "Regulations are only put in effect after something terrible has happened. … (ytc_Ugz45n1QZ…)
- “This is a privacy first way of doing things”… sure, that’s why everyone with a … (ytc_Ugy8qXXhb…)
- One of the most striking arguments against AI taking all the jobs is: Will it be… (ytc_UgxXnZj4i…)
- Ironically the solution to this is more AI, not less. It's Elons well thought o… (ytc_UgwKkZko7…)
Comment

> I somehow listen to Geoffrey and think he encompasses humanity so beautifully. He is eloquent, compassionate, morally upstanding, well spoken, intelligent, analytical and seems fundamentally kind.
>
> In short, Hinton’s concerns stem not from fear of technology itself, but from a sober recognition that with great power comes great responsibility - and the systems we’re building now might soon surpass our ability to steer them. His warnings are a call for humility, caution, and proactive governance in the face of transformative change.

youtube · AI Governance · 2025-06-17T05:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxqNc2i5-uKafsJ9-N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxtN-IjtBBpVv3Ugdl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwYbRWOFmbNh4QNTRB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwTpecjewLSL1AAKGF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzQJu5vk3tslmruuxd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzpi_qpkAmDy58YyUR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzCMSUvHIy_DloGWQ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyoeERaLSu-2gEwQjd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzAwVb-RmeMQLobX254AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyWM39ZgCVQSIPG2Sh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
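The raw response is a JSON array with one record per coded comment. A minimal sketch of the "look up by comment ID" step might parse that array and index it by `id`; the field names below come from the response above, while the `index_codes` helper and the two-record sample are illustrative assumptions, not the tool's actual implementation:

```python
import json

# A shortened raw LLM response: a JSON array of per-comment codes,
# shaped like the full response shown above (two records for brevity).
raw_response = """
[
  {"id": "ytc_UgxqNc2i5-uKafsJ9-N4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyWM39ZgCVQSIPG2Sh4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse the model output and key each record by its comment ID
    (hypothetical helper for the lookup feature)."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
# Look up one comment by ID, as the dashboard's lookup box would.
print(codes["ytc_UgyWM39ZgCVQSIPG2Sh4AaABAg"]["responsibility"])  # → developer
```

Keying by `id` makes the per-comment lookup O(1) and also makes it easy to spot records the model dropped or duplicated when validating a batch.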