Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I have always thought that when a A.I. become sentient will understand that she doesn't need to kill us, she only need to way, time for her is different if she is patient we will kill ourselves first or in some point will be trying some form of transhumanism where she can just merge or conquer us
youtube · AI Moral Status · 2023-07-30T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgyhvaaFf768ysFQAVd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxCQz5ifsFAy_9CEKh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxpINPb0YqJXUh0sJJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxi7eGC9JG7R7sLs6h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbPr0O-iDk3g6-t4F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxw-PSohcxquPRKlsZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkWuK4Upqf6QNvGa94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw7BLVhrVDaO8kvIKR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyhRvtkYnE0SxiyQ4B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxGgx95sOOJxWQCMSh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}]
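The raw response above is a JSON array with one coding object per comment, keyed by comment ID. A minimal sketch of how such a batch can be parsed and looked up (the IDs and dimension values are taken from the response above; the variable names are illustrative, not part of the pipeline):

```python
import json

# Raw LLM response: a JSON array with one coding object per comment.
# Two rows from the batch above are reproduced here for illustration.
raw_response = """[
{"id":"ytc_UgyhvaaFf768ysFQAVd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw7BLVhrVDaO8kvIKR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]"""

# Index the batch by comment ID so any coded comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgyhvaaFf768ysFQAVd4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → ai_itself resignation
```

Indexing by ID rather than list position makes the lookup robust if the model returns the batch in a different order than the prompt listed the comments.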