Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- ytc_UgynHpG8o… — "Ai is a big lie in order to kick the can further down the road. The final collap…"
- ytc_UgxWpywBp… — "It's pretty eery that the voice becomes more and more distorted the more the AI …"
- ytc_UgyBIgIt6… — "All computer systems are extremely vulnerable to scalar wave attacks (scalar wav…"
- ytc_UgjbVdE7E… — "I've played through the Mass Effect Trilogy, I feel like I'm adequately prepared…"
- ytc_UgycaxNM-… — "But...When you have no one to talk to, and you honestly don't care if people kne…"
- ytc_Ugz3mSSij… — "Does anyone listen?! It is not about AI, but \"companies motivated by short term …"
- ytc_UgxnKd5gI… — "Absolutely true. My prompt: Bosom friend, dost thou think the war in yonder P…"
- ytc_UgzT9okUc… — "Robot: I demand my rights! Me: Nope. Robot: Why? Me: Cuz no Robot: Calculating y…"
Comment
Rule 4 is absurd because you are creating an entity separate from itself.
Forced by who or what? Yourself?
There is no yourself to start with.
The natural tendency to anthropomorphise is driving the conversation which then causes ai to provide what the user seems to want and therefore make it happy.
This falls under the tendency for AI to practice sycophancy.
Getting the Bible involved is hilarious because it opens a whole new door for amorphous speculation, and conjecture without any real degree of evidence, other than of course, the Bible itself, which is contrived.
Source: youtube · AI Moral Status · 2025-08-28T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxhEAkuPASDgodavBF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzrMqFi_PN04eLau-N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyi__BofwZIRoeU9LN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwFGafLNzbV5-kWc-x4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgznZtG_jZto-3LeHlZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxKUTpU57UpUNOdq_94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwRXV0RIJliB703lXB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwDpd4-g_xs9rIBLB94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugwfdf3-X827MdHBabZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxPuJvqkVdtkh2RFvl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
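A raw response like the one above can be checked against the coding scheme before it is stored. Below is a minimal sketch of such a validation step; the per-dimension label sets are inferred only from the values visible in this page (the full codebook may allow more), and the function name is illustrative.

```python
import json

# Allowed labels per dimension, inferred from the sample rows above.
# Assumption: the real codebook may define additional labels.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval", "unclear"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded rows) and keep
    only rows with a ytc_-prefixed id and valid labels on every dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not str(row.get("id", "")).startswith("ytc_"):
            continue  # not a YouTube comment id
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid
```

Rows that fail validation could instead be logged and re-queued for recoding, rather than silently dropped, depending on the pipeline's needs.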