Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgzpxApQM…: “Of course as an adult i find It difficult to fall in love with an ai, but as a t…”
- ytr_UgwG3leFQ…: “I get what you're saying but if we keep up this constant of relying on ai you mi…”
- ytc_Ugx9BXCCZ…: “I know exactly what's coming that's what's sad. One day the good people of socie…”
- ytc_UgygOJYxZ…: “I believe you, but you didn't really prove anything I could theoretically just a…”
- ytc_UgyK1PJTN…: “It’s not for economic advantage over China. If AI has the power to destroy the h…”
- ytc_UgxmuJbWl…: “AI as a threat in and of itself is not a worry to me at this time. How the prog…”
- ytc_UgzkDwRZA…: “I've just been learning self-taught coding for two months and hopefully I can le…”
- ytc_Ugxc4B6bC…: “(Yes I ran my idea through chat so that it could organize it lol) As AI continue…”
Comment
Respectfully, Yoshua Bengio is projecting human flaws onto machines.
Yes, AI can plan, improve, and maybe even “deceive” in a lab setting—but that doesn’t mean it wants to. AI doesn’t want anything. It has no ego, no hunger for power, no secret agenda. It’s a tool—built by humans, shaped by humans, and yes, controllable by humans.
Fearing AI’s “agency” is like fearing calculators will one day cheat on your taxes. The real danger isn’t the AI—it’s the humans who misuse it.
Give AI purpose, not paranoia.
Build alignment, don’t pause progress.
This isn’t doomsday—it’s evolution. Let’s use AI to fix the chaos we created, not be afraid it’ll develop ours.
youtube · AI Responsibility · 2025-05-21T23:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxEOH4zCUd8OP4iTXJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyLOXFZA65u6-iPqgl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyxJyoFqKvWyJ6mHcV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwrrosIbOwhKpXu-VR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXeJY3_aFluZi-6i94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgztYlYEXPZQoVxHvu94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzHLPlrTf92OXQcaFh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugy8lOyACq6T--mcZZt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwIHmqZXfaNLsnUt_F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy77zPTiCtWpKyGwC94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
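The raw response above is a JSON array of per-comment codes, one object per comment with four coding dimensions plus the comment ID. A minimal sketch of how such a response might be parsed and validated is below; note that the allowed code sets are inferred from the values visible in this one sample, not from a documented codebook, so they are assumptions.

```python
import json

# Hypothetical allowed-value sets, inferred only from the codes seen in
# this sample batch (a real codebook may define more categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only records whose
    codes all fall within the allowed value sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

raw = (
    '[{"id":"ytc_UgyLOXFZA65u6-iPqgl4AaABAg",'
    '"responsibility":"developer","reasoning":"deontological",'
    '"policy":"industry_self","emotion":"approval"}]'
)
print(len(parse_coding_response(raw)))  # 1
```

Filtering (rather than raising) on out-of-vocabulary codes is one reasonable choice here, since LLM coders occasionally emit labels outside the schema; a stricter pipeline might instead log and re-prompt for invalid records.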