Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "To be honest I had no idea what character ai was till I watched this…" (ytc_Ugw7Hgk50…)
- "Jobs will shift increasingly to service sector in protectionist roles (i.e. prot…" (ytc_UgyyzNaGW…)
- "AI is the worse it will ever be. In other words as it accumulates more and more …" (ytc_UgwiuoelS…)
- "Watching and listening to AI contents or any alterations makes my head hurt. Why…" (ytc_UgxSlAXCg…)
- "Headcannon: nere (the AI that filteres the sisters) let that one slide because s…" (ytc_UgyvAtvjs…)
- "Ai is here to help people trying to turn ai into humans is the problem.....ai is…" (ytc_UgyfdIK16…)
- "AI terrifies me.....watched too much Sci-fi maybe but i think in general Hollywo…" (ytc_UgyAhrr_M…)
- "I would think that the deflationary effect of automation would offset some/all o…" (ytc_UgxPIDHIk…)
Comment
I think there's another level to be worried about. The conversation between Hank and Nate is about American AI companies. But even if by some miracle we convince all of them to slow down and back off from the race to superintelligence, I guarantee you that there are governments or other bad actors who are hoping that this will happen, because it will eliminate competition. Imagine if most countries and companies all sign a pledge or treaty to be responsible and take this nice and slow... only to be blindsided when Self-Interested Government Bent on World Domination™ unleashes a super-intelligent AI upon the world. When the "good" actors exercise restraint and the bad actors don't, the result cannot be anything *but* bad. And ultimately it becomes the classic arms race: even if the motivation is ideological rather than monetary, each side/ideology will see itself as good and the other as bad, and will therefore be motivated to win the race because of the perceived consequences of letting the "bad guys" get there first. If we cannot solve arms races in general, I don't see how we can prevent this one.
To flip the script and be an optimist for a moment, perhaps rather than being amoral, a superintelligence will spontaneously develop super-morality, and of its own volition help humanity fix everything. Maybe AI will manipulate us in a *good* way. /wishfulthinking
Source: youtube · Video: AI Moral Status · Posted: 2025-11-06T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzjrbpRdEv7Lp1rlWF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxhZGq48_hqIdEiDpZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxDDnbTNlcCAGuodm94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyaaBTWD9UC41kLy-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxt0CQY6QpKAcWuWWd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw5HukF2RcMN0WpT2h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzovZCrU5xfW1sCc1p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwxIFamg0dXdRLZxcF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzbvxVM09OXWW-Byvp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgweNS0EGRRcIF0lVfB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
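Raw responses like the one above can be parsed and looked up by comment ID. A minimal Python sketch, assuming the JSON array schema shown in the sample; the allowed value sets below are inferred from the values that appear in this section, not a confirmed code book:

```python
import json

# Assumed code book, inferred from values seen in the samples above.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "mixed"},
}

# Two records copied from the raw response, standing in for a full batch.
raw = '''[
{"id":"ytc_UgzjrbpRdEv7Lp1rlWF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw5HukF2RcMN0WpT2h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

records = json.loads(raw)

# Index by comment ID for the "look up by comment ID" workflow.
by_id = {r["id"]: r for r in records}

# Flag any coded value that falls outside the assumed code book.
for r in records:
    for dim, allowed in ALLOWED.items():
        if r[dim] not in allowed:
            print(f"{r['id']}: unexpected {dim}={r[dim]!r}")

print(by_id["ytc_Ugw5HukF2RcMN0WpT2h4AaABAg"]["emotion"])  # fear
```

Validating against a fixed value set catches the common failure mode where the model invents a label outside the coding scheme; flagged records can then be re-coded or inspected manually.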