Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "However, you choose to slice and dice AI prospects, there needs to be a change i…" (ytc_UgyzprI5x…)
- "the ECHO CONSERN i hate to defiant AI on this one . but can someone tell me \" 1…" (ytc_UgzM4NnaR…)
- "There's a 4th option that AGI isn't cost effective, which makes it impractical. …" (ytc_UgwurTSRl…)
- "We’re heading for a virtual economy , no one really works but checks are issued …" (ytc_UgzPtn0v5…)
- "Good, I hope we win.. Wait a minute... I know he'll win! \" Fight for open AI to …" (ytc_UgygE0Nj8…)
- "If nobody has an income then who the hell is going to pay for any goods or servi…" (ytc_UgyTSQonb…)
- "@nathanuncentered6172 I work in the field and have a CS degree. I'm not claiming…" (ytr_Ugxs2kHxv…)
- "I always treat chatgpt with respect. Now amazon alexa on the other hand can get …" (ytc_Ugw6RR-B8…)
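The look-up-by-comment-ID workflow above can be sketched as a simple dictionary index over the parsed coded records. This is a minimal illustration, not the dashboard's implementation: the record shape is taken from the raw LLM response shown on this page, while the `index_by_id` function name and the truncated-to-full ID mapping are assumptions.

```python
import json

# Two records in the shape of the raw LLM response shown on this page.
RAW_RESPONSE = """[
  {"id": "ytc_UgyOMvdeggdM2yIYCfJ4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyPo0SoQKF1E2CYLyh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and index the coded records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_UgyOMvdeggdM2yIYCfJ4AaABAg"]["reasoning"])  # contractualist
```

Once indexed this way, inspecting the exact model output for any coded comment is a single dictionary access.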
Comment
Hinton is brilliant on AI safety, but his blind spot is governance. He criticizes capitalism without recognizing it’s an emergent system based on voluntary exchange, private property, and rule of law. It’s not top-down control, it’s bottom-up order. The irony is that he fears uncontrolled AI, yet distrusts the only system we’ve built that channels self-interest into innovation without central planning.
| Source | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2025-06-23T21:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyOMvdeggdM2yIYCfJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyPo0SoQKF1E2CYLyh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxb7eyW-Roc410kCfl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyh5XnF9XRjwHuO59J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxuKVgwIRPxC8Hfa8J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxUG-10aOIXcJZf_Op4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxrov0xKmPqxVjXgTh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyJFcq5HBwbUJt8rGB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxdE1SDcw5ZVhs1rTN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugxjtm6yh-VZzmb7jDR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
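Before a raw batch response like the one above is ingested, each record can be checked for the four coding dimensions with recognized labels. The sketch below is a hedged illustration: the label sets are inferred only from the values visible on this page (the real codebook may define more), and `validate_batch` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Allowed values per dimension, inferred from the responses on this page;
# the real codebook may define additional labels.
ALLOWED = {
    "responsibility": {"distributed", "none", "company", "developer", "user", "ai_itself"},
    "reasoning": {"contractualist", "unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw batch response and report records with missing IDs or unknown labels."""
    errors = []
    for i, rec in enumerate(json.loads(raw)):
        if "id" not in rec:
            errors.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"record {i}: {dim}={value!r} not in codebook")
    return errors

sample = '[{"id":"ytc_x","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"}]'
print(validate_batch(sample))  # []
```

A check like this catches the common failure modes of batch coding: a model dropping a field, inventing a label outside the codebook, or omitting a comment ID, before any bad record reaches the results table.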