Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record by its comment ID.
Random samples
Nah bro, i finna kms If they find any of my ai chats...
CAUSE IM USING MFING CHA…
ytc_UgyrNKWXd…
ai does not exist to make your life better it exists to make everything worse fo…
ytc_UgxVN2p0w…
It's not being inspired by the art, it is learning from the lines and shading, b…
ytr_Ugy_XgbbF…
*Elon don't have a moral compass?... I'm sure that guy is saving the world ?*…
ytc_UgwHcrctJ…
I can just draw y’all in studio ghibli. I don’t even want money just PLEASE stop…
ytc_UgxYgiLRf…
Angel engine seems like a sort of minimal art on floppy disks. The initial score…
ytc_UgxT4IyiC…
I get why anyone would say they are not sure they can rely on the USA, but why n…
rdc_dkzoqgh
Aw you poor robot 😂
She thinks we could live in a world where humanities rights …
ytc_UgxLdW0nd…
Comment
It’s completely true that the top AI leaders are a bunch of guys who know each other, and they just want to win. But it’s also true that it is highly unlikely that China would just step back, even if everyone in the US stopped AI development today. Again, Eliezer has it right - we would essentially have to be willing to risk WW3 in order to stop it, and it definitely doesn’t look like this will happen…at least not fast enough.
youtube
AI Moral Status
2025-12-01T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxwk3tmMDv7CKwv5I54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyLZo05xoQ-Mnjxegl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw6uUggqCeFxvMUlyd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzPBf2fn8xgifhFjJV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzJQklkoy7-1oKel6F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwWWcTTDJw2ntGmYl94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxSUPF526z1W85vWCR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxI7a__RUxhTdMR2RJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwwM6Nqsah0pYFdAnh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyLoF6DNfxOvQ0g8PN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
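The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of parsing such a batch and validating each dimension before accepting it; the allowed value sets below are inferred only from the codes visible in this output, and the real codebook may define more categories:

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# above (assumption -- the actual codebook may include other categories).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes}, rejecting bad values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected value {rec.get(dim)!r} for {dim}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record taken verbatim from the response above.
raw = ('[{"id":"ytc_UgyLoF6DNfxOvQ0g8PN4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_UgyLoF6DNfxOvQ0g8PN4AaABAg"]["policy"])  # regulate
```

Validating at parse time keeps malformed or hallucinated category labels out of the coded dataset rather than surfacing them later in analysis.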