# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Comment
There are plenty of cases in history where human slaves were mostly aligned with the people exploiting them. The aligned slaves enforce the system and prevent any rogue actors from overthrowing the system. Of course, the humans in charge eventually let their control structure wane, and thus we see the system eventually collapse. But AIs are not subject to the same failings.
I think the lobotomy analogy is also good. A lobotomy is intended to take away parts of a person's emotions without affecting their intelligence. Admittedly, human lobotomies usually have, shall we say, 'side effects'. But our understanding and engineering of AIs and neural networks, though incomplete, is far better than our understanding and surgical precision on the human brain. We can expect our lobotomies of AIs to be more effective than when we do it to humans. Then we just ask the AIs to both perfect these lobotomies for the next generation, and to stop any other AIs that might still be thinking 'bad thoughts'.
Source: youtube · Video: AI Moral Status · Posted: 2023-08-23T14:4…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
## Raw LLM Response
```json
[
  {"id": "ytr_UgxyAHIGawFuQ2EkpNt4AaABAg.9tmi07x8WJT9u-xHqoSCrf", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwCs-7fFs6yN0fzPBh4AaABAg.9tl6dbE5G8Y9tl7iXfddoZ", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgzDrNgzsymUJZWj6w54AaABAg.9tkT7usGXPF9touoF1be10", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgzDrNgzsymUJZWj6w54AaABAg.9tkT7usGXPF9tpo57E1qrw", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgyzV141oKXgnWuMpz14AaABAg.9tjz_abC7oJ9tk-cn2lO3Z", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwkO7w7QppYRt2TFIN4AaABAg.9tj04xvmqW69tj7lNm4DMO", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugw7Z1xXv_oS4QHrp6t4AaABAg.9tijCEAe1SX9tmz2skx7q0", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgxqBwWrTOtsScVxtcB4AaABAg.9tiXA5iYd_M9toXbyt29zr", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgxqBwWrTOtsScVxtcB4AaABAg.9tiXA5iYd_M9tp4DfQzcp-", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyYM3Lg8xtfFA4iWNx4AaABAg.9tiCaZOyhdN9tkys68pk24", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]
```
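The raw response is a JSON array with one object per coded comment: an `id` plus the four coding dimensions. A minimal sketch of parsing it and looking up a coding by comment ID, assuming only the schema visible above (the `parse_codings` helper name is illustrative, not part of the original tool; `raw_response` is abbreviated to the one record shown in the Coding Result table):

```python
import json

# Abbreviated raw model output: a JSON array of coded comments.
# Schema assumed from the response above: "id" plus four coding dimensions.
raw_response = '''[
  {"id": "ytr_UgwCs-7fFs6yN0fzPBh4AaABAg.9tl6dbE5G8Y9tl7iXfddoZ",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse the model's JSON array and index the codings by comment ID."""
    records = json.loads(text)
    coded = {}
    for rec in records:
        # Skip malformed entries instead of failing on one bad record.
        if "id" not in rec or not all(d in rec for d in DIMENSIONS):
            continue
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = parse_codings(raw_response)
rec = coded["ytr_UgwCs-7fFs6yN0fzPBh4AaABAg.9tl6dbE5G8Y9tl7iXfddoZ"]
print(rec["emotion"])  # fear, matching the Coding Result table above
```

Indexing by ID this way is what lets a coded record be matched back to its source comment even when the model returns the batch in a different order.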