Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
There’s a quiet calm that comes from hearing someone like Geoffrey Hinton express what I’ve long felt about AI.
I’ve worked in this space since before it was popular, read deeply, and often felt alone in how I view where this is all heading.
But hearing him lay it out—with humility, clarity, and no hype—felt like confirmation.
No posturing, no noise—just truth. A rare thing, especially compared to the performative takes you hear on podcasts like All-In.
Thank you, Mr. Hinton.
Source: youtube · Topic: AI Governance · Posted: 2025-06-23T04:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzOrWatAb5cM20OUbN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQ2TQM10TRrhgX9PJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_KWDgs88J_kX-MEx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzW-k8yKRLfnlyS63d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzVOIoHqJb-0HsMfwF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx7NLxeBYc4gnTS9hB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwjgv8G0aCvTFNG0cx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzJWnUIcUaMwa7qcyB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzqjHVHHkm39R_wcaJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxEkFgTobeCXoaEeL54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
```
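Because the coded batch above is plain JSON, it can be validated programmatically before the codes are stored. A minimal sketch, assuming the four dimensions and only the value sets visible in this sample (the real codebook may allow more values; `validate_batch` is a hypothetical helper, not part of the tool shown):

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "company", "developer"},
    "reasoning": {"mixed", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"approval", "indifference", "fear", "mixed"},
}

def validate_batch(raw: str) -> Counter:
    """Parse a raw LLM response and tally codes per (dimension, value).

    Raises ValueError if any row carries an out-of-schema value, and
    KeyError if a required dimension is missing entirely.
    """
    rows = json.loads(raw)
    tally: Counter = Counter()
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row[dim]  # KeyError here means the model dropped a dimension
            if value not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
            tally[(dim, value)] += 1
    return tally
```

Run against the ten-row batch above, this would flag any hallucinated code value while also giving marginal counts (e.g. three rows coded `emotion=fear`), which is a cheap sanity check before aggregating across batches.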