Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_Ugw792EQ2…` — "Thats why I've been saying AI generated images arent art. Because theres not art…"
- `ytr_Ugy6H8dp0…` — "Absolutely! The idea of collaborating with AI like Sophia is fascinating. It ope…"
- `ytc_UgyE58z8x…` — "He deteted the whole message and he said what is 9+10 and the ai said it is 19 i…"
- `ytc_Ugybbuusx…` — "As someone who made some AI generated pictures (and that is also actually learni…"
- `ytc_UgxXCkNG4…` — "Very informative. I've been using chatgpt for a long time and I have gotten bett…"
- `ytc_Ugx_rtIfV…` — "AI does not have any (ANY) reason or need to take over humans. NONE whatsoever.…"
- `ytc_UgyFVP0aO…` — "If AI is so dangerous then what the hell did he even make it for in the 1st plac…"
- `ytc_UgwrBsqI-…` — "Ai is not provably conscious, and even if it were it is far too dangerous to all…"
Comment

> The robot is asked to send a letter. The robot knows that he can't send a letter in time, so he is going to ask you to send it. He knows that you are going to refuse, so he is going to force you to send a letter. He knows that you are going to resist and destroy the robot, so he is going to kill you beforehand, and to cut on time it would be better to kill you before even asking you to send a letter or doing anything else. The fastest way of killing you is calculated. You are dead, and the robot doesn't need to send a letter or perform any task that you gave him anymore. XD

youtube · AI Moral Status · 2024-03-16T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
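Each dimension in the table above takes one value from a fixed codebook. As a minimal validation sketch — assuming only the values actually observed in this page's raw response, which may be a subset of the real codebook — a coded row can be checked like this:

```python
# Hypothetical validation sketch. The allowed values below are only those
# observed in this page's raw LLM response; the real codebook may be larger.
OBSERVED = {
    "responsibility": {"ai_itself", "none", "company", "government", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"fear", "resignation", "indifference", "outrage", "approval"},
}

def invalid_fields(row: dict) -> list[str]:
    """Return the coding dimensions whose value falls outside the observed sets."""
    return [k for k, allowed in OBSERVED.items() if row.get(k) not in allowed]

# The row shown in the Coding Result table above:
row = {"responsibility": "ai_itself", "reasoning": "consequentialist",
       "policy": "ban", "emotion": "fear"}
print(invalid_fields(row))  # -> []
```

A row with an unrecognized value (or a missing dimension) shows up in the returned list, which makes malformed model outputs easy to flag before they reach the table.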
Raw LLM Response
```json
[
  {"id": "ytc_UgzfvFuZ76W8WrJ4ldh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx1YtvmJBGyxa7xN1x4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyvO3iXf7sBGG0aLqt4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy1ylKx1NFwIfB0N8l4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwhxMf1nWDbFh17SOV4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzb8V66eQWin6DZxBt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgydRodPqlBB2A_yaBN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx2E-ouNJd783sJGot4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwsTVUkerQBpvCp-Yd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxCzX4k94XMwtMmLfx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```