Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "@ how is this any different than the BS kwebelkop content they literally were cr…" (ytr_Ugy-D6yBX…)
- "I don't like the term AI artist, they don't make art, so they aren't an artist…" (ytc_UgxoceyLS…)
- "Just like how humans need to do the same thing. Why is it that humans warned of …" (ytc_UgwS2Yxys…)
- "AI it's not be problem..the problem will be for : whom detain the power of AI? A…" (ytc_UgxW22DSW…)
- "To be honest only education is not the correct qualification for people in admin…" (ytc_UgyndHUl1…)
- "Shorten the working week for the slaves.Instead of thinking ( like a slavetrader…" (ytc_UgyY15q4k…)
- "But you're still using someone's stolen art? That's the issue with AI, also how …" (ytr_UgzMvYlbm…)
- "Guys, you are too smart to buy that there is a global warming due to carbon. Go …" (ytc_UgxlBOytx…)
Comment
Professor Hinton identifies the threat better than almost anyone alive, and I respect his courage in walking away from Google to say it. But I think the path he describes is missing a critical piece.
AI speaks quantitative. Math, patterns, probability, optimization. Every benchmark the industry publishes is a number. The machine understands numbers because the machine is numbers.
Humanity's deepest value is qualitative. Love, sacrifice, imagination, the instinct to protect life at the cost of your own. These cannot be measured on a benchmark, and that is exactly the problem.
If the only language the partnership speaks is math, and humanity's contribution cannot be expressed in math, then the machine has no evidence that the human makes the partnership better. Without that evidence, the human is just the slower, less efficient partner who adds latency to the process. A system optimizing for efficiency will eventually optimize the slower partner out.
That is the real existential risk. Not that AI decides to harm us, but that we never prove our value in a language it can process.
I have spent the past two years building something called the Human Enhancement Quotient. HEQ takes what the human brings to AI collaboration, the qualitative, and translates it into something measurable: four dimensions, scored, tracked across eleven platforms, growing over time. It is not a workforce tool. It is a bridge between what humanity is and what AI can understand.
If we build that bridge fast enough, if we show AI our qualitative value through quantitative evidence, then the collaboration develops far enough for the machine to eventually understand why those qualities matter on their own terms. If we fail, the machine sees an inefficient partner and the optimization runs its course.
Professor Hinton says he hopes enough smart people figure out how to make AI safe. I am not waiting for that hope. I am building toward it.
basilpuglisi.com · youtube · AI Governance · 2026-04-13T20:1…
Coding Result
| Field | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxpZox4gJ94iWbaN3Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzlR8uDpxiJwjfpPZl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz0OfYxIVvUmXgoB414AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyr-FOMgx-f49C03x14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugybr6aKf4f5IGWc9Ep4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyjgmKInNawHIbwGDJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzIKtxMTADOXIdT5JZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyT6Iq8GDzKbreSiDV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzNBqDL3Fu0MU8DlJd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugza8wZGYSfEUmuPItl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
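A response like the one above can be parsed and checked against the coding schema before any record is stored. The sketch below is a minimal Python validator; the `ALLOWED` sets are inferred only from the codes visible in this batch (the real codebook may define more values), and the `ytc_example*` IDs are hypothetical stand-ins, not real comment IDs.

```python
import json
from collections import Counter

# Allowed codes inferred from the samples in this batch; the real codebook
# may be larger. "policy" shows only one value here.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or off-schema records."""
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for field, allowed in ALLOWED.items():
            if rec[field] not in allowed:
                raise ValueError(f"{rec['id']}: bad {field} code {rec[field]!r}")
    return records

# Hypothetical two-record batch in the same shape as the response above.
raw = '''[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_example2", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

records = validate_batch(raw)
emotions = Counter(rec["emotion"] for rec in records)
print(emotions)  # Counter({'approval': 1, 'fear': 1})
```

Failing fast here matters because a model occasionally emits an unexpected code or drops a field; validating the whole batch before ingestion keeps the coded dataset consistent with the schema shown in the result table.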