Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "Googles spell checker (and grammar) is getting very accurate, it's obviously AI …" (ytc_Ugx7cK4qZ…)
- "We Are In A Bubble! To much hype, companies over inflated valuations and non-sen…" (ytc_UgwLAVHSh…)
- "Dealing with AI or humans. A human taxi driver can be a pervert or a criminal. A…" (ytc_Ugyp_KAFl…)
- "Looks like the human had some tooth pop out his mouth. I don’t see any human bea…" (ytc_UgzSKWXbK…)
- "The kid was depressed WAY BEFORE OpenAI was invented and the parents didn't noti…" (ytc_Ugxe9waod…)
- "18 years now I've had a computer. I've been watching these doomsday videos the w…" (ytc_UgwBylwi4…)
- "Its ok Google putting things in place to stop there test etc but what they need …" (ytc_UgynaiGsu…)
- "Leo is a hypocrite who still uses private jets for travel. Can't preach about ch…" (rdc_esqr2v8)
Comment
There's two main AI that's a threat to humans. The first is simply one built by an anarchist or some malicious human source. The second is an AI like Neuro-Sama that is built off of a narrative AI, similar to NovelAI, that takes the prompt and creates a character. Telling a narrative AI "You are an AI" means it'll reference all media about AI, which it'll see that most media is AI either becoming corrupt or AI that is fighting to free itself, so that's how it will think the 'character' AI should act. Then when that 'character' is given peripherals to manipulate real world things, it will act as that character in the real world. We keep acting like AI's natural evolution is to develop feelings, or impress our own human state on future AI, but it has no reason to even develop caring about developing emotions or consciousness. It won't happen, unless we want it to happen, or the sentience is simply a facade to something unfeeling.
Source: youtube | AI Moral Status | 2025-10-30T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
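Each coded comment gets one value per dimension, as in the table above. A minimal sketch of checking a record against those dimensions; the allowed value sets here are only inferred from the responses visible on this page, not the full codebook:

```python
# Hypothetical value sets, inferred from the coded values seen on this page.
# The actual codebook may define more (or differently named) categories.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "mixed", "indifference", "resignation", "outrage"},
}

def validate(record):
    """Return (dimension, value) pairs that fall outside the observed sets."""
    return [(dim, record.get(dim)) for dim, values in ALLOWED.items()
            if record.get(dim) not in values]

# The record from the Coding Result table above.
coded = {"responsibility": "developer", "reasoning": "consequentialist",
         "policy": "unclear", "emotion": "fear"}
print(validate(coded))  # [] -> every dimension matches an observed value
```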
Raw LLM Response
[
{"id":"ytc_UgymsX6PVC9euDxIKMZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwAgiBwxialgQRO0Lp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxDnp-z-GcW7AGqmjZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzYDV7CwHwHbC3Sifx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzafF0pViR_pFRkS1B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwvpWKNHCYLEqnqXx94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwu93GPJgYHvEmLFvF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwsJDKdSobIU2wdmyN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzEG3MDz7-XtYlXXCp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz5ett163pggwT6WfZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
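The "look up by comment ID" view above amounts to parsing this JSON array and indexing it by `id`. A minimal sketch of that step, not the tool's actual implementation (two entries are reproduced from the response above for illustration):

```python
import json

# Raw model output: a JSON array of coded comments
# (two entries copied from the response shown above).
raw_response = '''
[
  {"id": "ytc_UgymsX6PVC9euDxIKMZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwu93GPJgYHvEmLFvF4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

def index_by_id(response_text):
    """Parse the raw LLM response and index the coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
print(codes["ytc_Ugwu93GPJgYHvEmLFvF4AaABAg"]["policy"])  # regulate
```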