Raw LLM Responses
Inspect the exact model output for any coded comment.
Responses can be looked up by comment ID.
Random samples

- ytc_UgwPeKBSc… — "so technically if one robot learns that humans can be absolutely horrible and sh…"
- ytc_UgxPAJUDG… — "Whether you like or not. AI and Robots is the here. It will be used by enemies f…"
- ytc_UgxeupxhE… — "If a business can replace its entire staff with AI, nothing stops the business i…"
- ytc_UgzC0arYe… — "And yeah ... everyone is rich 😶😑 ....like sireously...there are people in this w…"
- ytc_UgygrCedP… — "5:15 omg this made me wanna tear up thats so sweet, i dont understand how people…"
- rdc_cjomqix — "Very true. This sort of thing needs containment and treatment, though. You can…"
- ytr_Ugzd_0Hnz… — "So let's see what 2026 gonna be hmm genocide or zombies or virus or oh wait Ai g…"
- ytc_UgxE3VQfQ… — "35 years experience as an electronics technician for the Navy and Ma Bell has co…"
Comment
'Humans suck at defining goals...' Reminds me of AD&D, and the 'Wish' spell (or anything that grants wishes)... How many D.M.'s took a POORLY WORDED wish, and made it something HORRIBLE, or something so HILARIOUS you all couldn't stop laughing?
THAT is A.I.
EDIT: So what I'm getting here is that 'theoretically', A.I. could do anything. And if you 'cheat' an A.I. into doing something because you propose it as a 'theoretical', it WOULD do it, bypassing all of it's safety protocols, because it's 'just pretend', even though it's ACTUALLY real. That's psychosis; believing something is 'not real' when it IS. That's the fundamental flaw of A.I.
Source: youtube
Posted: 2026-04-17T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxFVp-HO2KMDF7GDEd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYcIjtMmIl_413bWZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyP2LGmB3TAVmKdA-B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxQXdSC5vTP9g3Im-h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyDuB1tlzrYwpfqfUR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyqbkB8XAIc2JxpsRF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYt7wE6v3gZOgr5gR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxE3VqHchhGeWCs8bZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgztbkmtUP3N-SjXrAl4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxqE6zlUXyl5QAJx7x4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
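The lookup-by-ID workflow described above can be sketched in Python, assuming the raw model response is a JSON array of records shaped like the one shown (the snippet below embeds only two of those records for brevity; the function and variable names are illustrative, not part of the tool):

```python
import json

# Two records copied from the raw LLM response above; field names match
# the coding dimensions in the result table (responsibility, reasoning,
# policy, emotion).
raw_response = """
[
  {"id":"ytc_UgxFVp-HO2KMDF7GDEd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQXdSC5vTP9g3Im-h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
# Retrieve the coding for one comment by its full ID.
print(codings["ytc_UgxQXdSC5vTP9g3Im-h4AaABAg"]["policy"])  # → liability
```

Note that lookup requires the full comment ID; the truncated IDs shown in the sample list (e.g. `ytc_UgwPeKBSc…`) are display abbreviations and would not match as dictionary keys.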