Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- If you can't draw with only your hands and a pencil, you aren't a real artist. D… (ytc_UgwUgmWqd…)
- I just going to say this I support AI art but I going to set something straight … (ytc_UgyoMxnY2…)
- We can make robots look like humans, but we can't help humans look the way they … (ytc_UgyYiqBo7…)
- Every time AI says "I understand your perspective" that's a nice way for it to c… (ytc_Ugypzdx3x…)
- I'm a UI/UX designer, I got a freelance offer a while ago but right before I sta… (ytc_UgwjvLwHw…)
- God bless all automotive automotive automatics automatics. May the lord of rings… (ytc_UgwWpRY5F…)
- Llms are incredibly powerful and to call them just token prediction is a gross m… (ytc_UgyZWVBH_…)
- None of the answers consider the option of moving off grid. That's baffling to m… (ytc_UgwJnjUmN…)
Comment
kind of crazy to hear that AIs know when they're being tested but also it totally makes sense. we cant actually reinforce against a behavior, it is only possible to reinforce against us *observing* the behavior. if every time we **see** an AI do something we don't like we tell it no, it will learn to either not do it or just have us not **see** it. thats just how selection works. scary stuff
Source: youtube · Video: AI Moral Status · 2025-10-30T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxdWB2GvyUuqIVlCi54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgybtjBUk39J3illv054AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6W6lYH1D8Uj9Bwxl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzLmQDK4VS0RkkLAUd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw83iGH3FmGlHOpS314AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw5gyINpG8jmJV9s6V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzelWm4EbPVk114lMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyk7e-1BrjucVChMBR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxBdApmyz7dTqviZ154AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz590g8tnUELebYGlN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
```
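The raw response above is a JSON array with one object per coded comment, each carrying an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of turning such a response into a lookup table keyed by comment ID might look like the following; the field names are taken from the response above, but the validation logic and the `parse_coding_response` helper are illustrative assumptions, not part of any documented tool.

```python
import json

# Fields expected in each coded row, based on the raw response shown above.
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> dict:
    """Hypothetical helper: map comment ID -> its four coding dimensions.

    Raises ValueError if a row is missing any expected field, so malformed
    model output fails loudly instead of silently dropping codes.
    """
    coded = {}
    for row in json.loads(raw):
        missing = EXPECTED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '<no id>')}: missing {sorted(missing)}")
        coded[row["id"]] = {k: row[k] for k in EXPECTED_FIELDS - {"id"}}
    return coded

# Example using the first row of the response above.
raw = ('[{"id":"ytc_UgxdWB2GvyUuqIVlCi54AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgxdWB2GvyUuqIVlCi54AaABAg"]["emotion"])  # fear
```

Keying by comment ID makes the "look up by comment ID" view cheap: each inspection is a single dictionary access rather than a scan of the raw response.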