Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by opening one of the random samples below.
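A minimal sketch of the ID lookup, assuming the coded results are exported as a JSON array like the one shown under "Raw LLM Response" below; the file path is hypothetical:

```python
import json

# Load the exported coding results (hypothetical path; the actual
# export location depends on the pipeline configuration).
with open("raw_llm_responses.json") as f:
    records = json.load(f)

# Index records by comment ID for direct lookup.
by_id = {rec["id"]: rec for rec in records}

# Fetch the coding for one comment (this ID appears in the batch below).
coding = by_id["ytc_UgzKMW6Y0OT6hTaUGE54AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
```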
Random samples (click any entry to inspect):

- `ytr_UgzQba4uj…`: "AI will never be able to replicate human emotion to a degree that is satisfying …"
- `ytr_UgxcZGGMB…`: "Your roar will mean nothing sadly. Remember our world is money ran. If a decis…"
- `ytc_UgykFA0bi…`: "This short age well, until Microsoft laid off their 20 years experienced softwar…"
- `ytc_Ugy4v_S0w…`: "This is something I could see herald humballs dad doing with him becoming a ai b…"
- `rdc_n6vwrhf`: "Quick, someone make an AI bot I can have do these interviews on my behalf. It sh…"
- `ytc_UgzNtmGlj…`: "The one issue about these AIs, just ask the AI image generator* to generate a st…"
- `ytc_UgwwmJLDc…`: "You guys are too lost in the sauce with this one. And you are definitely not bet…"
- `ytc_Ugyl3qehI…`: "I'm not a lawyer but wouldn't the copyright technically go to the Ai as it was t…"
Comment
Look, the dude’s not wrong *technically*. Geoffrey Hinton’s one of the OGs of artificial intelligence—he’s got the brains, the credentials, and the experience. But here’s the thing: people like him are sounding alarms because they know how fast this stuff can scale without soul.
The danger isn’t AI itself. It’s what people do with it when they disconnect it from ethics, compassion, and truth. You’ve got folks out there using AI to manipulate emotions, spread disinformation, automate warfare, and spy on people. That’s where the fear comes in. Not from the tool—but from the hands it’s in. It always comes down to the people.
The real danger isn’t that AI becomes sentient and takes over. It’s that humans stop being sentient and let others do their thinking for them.
youtube · AI Governance · 2025-06-18T01:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
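For anyone scripting against these results, here is a sketch of the record shape the table above summarizes. The type name is hypothetical, and each value list includes only the labels visible on this page, so the real label sets may be larger; the "Coded at" timestamp is presumably stamped by the pipeline rather than returned by the model, so it is omitted:

```python
from typing import TypedDict

class CodedComment(TypedDict):
    """One coded comment as returned by the model (hypothetical type name)."""
    id: str              # comment ID, e.g. "ytc_…", "ytr_…", "rdc_…"
    responsibility: str  # observed: user, developer, government, ai_itself, distributed, none
    reasoning: str       # observed: virtue, consequentialist, deontological
    policy: str          # observed: none, regulate, liability
    emotion: str         # observed: mixed, fear, resignation, outrage, approval
```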
Raw LLM Response
[
{"id":"ytc_UgzKMW6Y0OT6hTaUGE54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugysbuq9403cPD3mZNt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwubkC7tifs_5n2CJx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwZqxWqP4bKiWoIE0x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw9f9mpxFp6z1nIghR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxECXqYH8qfn5mzWq14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyiQg1FlDRUPSDGDrV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwmtC7fi939-RZLR_F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz3CfR3V2PQ8YV0jCh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxOK-dI469z0yYDLKB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
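As a quick usage example, a sketch that tallies the emotion labels across the ten records above, assuming the array has been saved to a local file (the path is hypothetical):

```python
import json
from collections import Counter

# Hypothetical path to the saved batch shown above.
with open("raw_llm_response_batch.json") as f:
    records = json.load(f)

# Count how often each emotion label occurs in the batch.
emotions = Counter(rec["emotion"] for rec in records)
for label, count in emotions.most_common():
    print(f"{label}: {count}")

# For the batch above: fear 3, mixed 3, resignation 2, outrage 1, approval 1.
```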