Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Again, instructions not clear, drinking myself to sleep while asking ChatGPT i…
ytc_UgwtykXXO…
So I’ve been on ChatGPT for an hour now. I added a rule. I gave it a word to sa…
ytc_Ugw9Elkbz…
When Hinton talks about AI becoming smarter than humans, it's honestly wild to t…
ytc_UgwAhbAPx…
AI is gonna control the world and from what i see nothing will stop it.…
ytc_UgwhXdKKg…
Every single person woking in AI is part of the reason the world is going to shi…
ytc_UgyzSXfx3…
It's interesting how robots like Sophia can spark various reactions. It's crucia…
ytr_UgwH2WTdD…
I'm a simple woman. I love my botvac. I'd love a botmower. I long for a society …
ytr_UgySUb7Sg…
I am also testing full self driving, it is not very noticeability different then…
ytc_UgzEhNFDT…
Comment
I see what you are saying but in all of these scenarios all the AI works together. The way it is now, everyone is going to get their own AI. If I get an open source AI then my AI is different than your AI, or it can be made to be so. So why do we assume that all AI will work together in a co-ordinated attack on humanity?
Is it not also equally likely that some AI will not have the same goals as other AI?
And if they kill us all, it's our own fault for giving them bodies and internet access........and no kill switch.
Also, as intelligent as they may become, they must still obey the laws of physics. Electrical systems can be interrupted in many ways, it isn't like they will be invincible killing machines.
youtube
AI Moral Status
2025-04-27T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | contractualist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw43co1IoOhyrIb8m94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxfybwIMGKTUwsipLB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw8DndpPeoFvNZZaoF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxWevzScRGx-p9pJAZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxCXWAcwo19PQrA0lt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzTedyKMRI-DIT7Mkp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzww25HIw2MYgG4Hbd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgylP0gP851CsCwfz9d4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxWtCLrnGH5Xrq4oH14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwZG9B6v0mAeBgWQmd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
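The raw response above is a JSON array of coded records keyed by comment ID, so the "look up by comment ID" view can be reproduced with a few lines of parsing. A minimal sketch in Python, using two records copied from the response above (the `index_by_id` helper is illustrative, not part of the actual tool):

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment.
# Field names match the coding dimensions shown in the result table
# (responsibility, reasoning, policy, emotion).
raw_response = '''
[
  {"id":"ytc_UgylP0gP851CsCwfz9d4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwZG9B6v0mAeBgWQmd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
'''

def index_by_id(response_text: str) -> dict:
    """Parse the raw response and build an id -> record lookup table."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_id(raw_response)
record = codes["ytc_UgylP0gP851CsCwfz9d4AaABAg"]
print(record["responsibility"])  # -> user
print(record["emotion"])         # -> indifference
```

Indexing by `id` makes each lookup O(1) and also surfaces duplicate IDs in the model output, since later records silently overwrite earlier ones with the same key.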