Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by its comment ID or by browsing the random samples below.
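Under the hood, a lookup like this only needs the raw batch responses indexed by comment ID. A minimal sketch in Python, assuming the responses are stored as a single JSON array like the one at the bottom of this page (the file name `raw_responses.json` is hypothetical):

```python
import json

def load_codings(path: str) -> dict[str, dict]:
    """Index raw LLM codings by comment ID for O(1) lookup."""
    with open(path) as f:
        batch = json.load(f)  # one raw response: a JSON array of coding objects
    return {item["id"]: item for item in batch}

codings = load_codings("raw_responses.json")  # hypothetical file name
print(codings["ytc_UgyYSiTuQhJNRN5sZnV4AaABAg"]["responsibility"])  # "ai_itself"
```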
Random samples:

- "If 99% of us will be unemployed, who can afford to consume the products and serv…" (ytc_Ugxn0tSTl…)
- "Heya! As an artist myself This video is peak And all i can say about the people…" (ytc_UgyfAmbsa…)
- "Southeast is really too broad for me to be helpful- Nashville and Mobile aren't …" (rdc_eh5glvr)
- "CEO is lying! Google has never ever made a serious attempt to ask the general pu…" (ytc_UgymsMinb…)
- "@SamuDemon_Animations you need a person to submit it to the AI, I think they cou…" (ytr_Ugw9PiagX…)
- "We are at the dawn of the birth of a new species, one that will be far superior …" (ytc_UgzQK_HzZ…)
- "What’s the issue I think everyone should know about Alberta’s unhinged behavior.…" (ytc_Ugwu55kAI…)
- "Saying AI isn't safe is like saying Internet isn't safe. It's true. It's our job…" (ytc_UgzFS68sh…)
Comment
Mr Hinton is one of the most credible people you could hope to listen to, though I would love to have heard his thoughts on a question that was not posed.
He talked a lot about how easily "AI" could wipe out the human race, but the question that occurs to me is what would be the motivation for doing so?
Destruction of ones enemies is often due to a perceived threat, however, I'm not seeing how humans could be a threat.
We need food and water to survive, and even if there were twice as many of us as there are now, consumption of those resources by us sill wouldn't pose a threat to computers.
Perhaps, as he suggests, the more likely threat to humans regarding AI is evil humans using it.
Or, was The Matrix scarily prescient, and humans merely become power sources.
Either way, like him, I'm glad I'm in the last seasons of my life, and not 18 years old right now.
youtube · AI Governance · 2025-07-14T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
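The four dimensions in this table come from a small closed codebook. A minimal sketch of that schema, with value sets drawn only from the raw responses shown below (the type and constant names are illustrative, not the pipeline's actual code):

```python
from dataclasses import dataclass

# Values observed in the sample responses below; the real codebook may define more.
RESPONSIBILITY = {"government", "company", "developer", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "mixed"}

@dataclass
class Coding:
    """One coded comment, as returned by the model (illustrative type)."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
```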
Raw LLM Response
[
{"id":"ytc_UgyPG2vDXaihuJ7VefZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugws2l-WckR1OPMB22Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYSiTuQhJNRN5sZnV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwflwvMMqrQUNdYcc94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw0YaZRxXrg3gy9-dF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzU8DqHK7hBVDhHrNl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwy8CK-hJ4YFUP3wsN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxDfWvyQXJs3-4Dgnh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz3ERBoMO7PKDOhPX94AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz5uVbhM38PrICYre14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
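Because the model returns a bare JSON array, a thin validation step can reject malformed batches before they reach the dashboard. A minimal sketch, assuming the response text looks exactly like the array above (the helper name is illustrative):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response, rejecting entries that lack a coded dimension."""
    batch = json.loads(raw)
    if not isinstance(batch, list):
        raise ValueError("expected a JSON array of codings")
    for entry in batch:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id', '?')} is missing {sorted(missing)}")
    return batch
```

Failing loudly here is deliberate: an entry with a missing dimension would otherwise surface only as a blank cell in the result table above.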