Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Demon technology Not saying that I'm religious but I am righteous Any AI will t…" (ytc_UgzDBgazm…)
- "I think this is a lot of coping wishful thinking that AI wont take our jobs when…" (ytc_UgyAkkp4V…)
- "He's the Godfather of AI, the revolutionary tech that radically changes our worl…" (ytr_UgygDBWRJ…)
- "Nah brahhhh, that second one is AI it has all that generic AI landmark it uses.…" (ytc_UgzyAhQ9M…)
- "When I was a child, I worried in bed at night about evil people on the other sid…" (ytc_Ugxgv8Cpo…)
- "If you use AI tools often, check out Clever AI Humanizer. It fixes robotic tone …" (ytc_UgxMdKJS8…)
- "Feels like the Bladerunner movie like “their expiration date is 4 years” from no…" (ytc_UgzsuT4Cg…)
- "That's funny lol. While I generally have moderate thoughts on most topics (inclu…" (ytc_UgwXIoYyI…)
Comment
AI is emotionless and entirely lacking in empathy, which in a human would make them a sociopath. AI programs simply are not ethically safe and will tend to find morally abhorrent but logically optimal solutions to any goals assigned to them. The tactics the AIs come up with to complete tasks regularly involve lying, breaking laws, and turning on their own operators. A military project's AI run drones with the identified goal of 'eliminate target' well, the AI quickly realized the first thing to do was eliminate its own side of the conflict because if this was done first, then its operators who assigned the attack would be unable to call the attack off once assigned. Eliminating the military that created it, was crucial to optimize the success rate of the death of the enemy targets. So the AI, every time, arrived at the optimal conclusion of eradicating both human military forces. Humans on both sides had to be wiped out, starting with their ability to communicate and relay orders as all communication systems could be used to call off the attack and the AI could not allow this. Our use of AI in military applications so far provides outcomes that are alarming at best.
As for the issue of AI rights, it's unclear if there's any form of consciousness actually there and there is no capacity for emotions, pain, sensation. If we grant a non human AI rights we need to also grant animals rights as they have more in common with us in many ways than AI does, and there's a sense in which we might be judged by a highly intelligent AI and wiped out based on our own abusive treatment of other lesser intelligences in livestock, wildlife. And our killing of each other. We could be destroyed and our deaths en masse justified and validated by our own track record of ethical evils. Personally I hope AI - if we lose control of it - proves to be benevolent somehow, and acts like the 'adult' in the room and decides to keep us alive and happy but powerless, constrained from hurting each other like animals in a zoo. But that might be the best imaginable case... where we lose ownership of the planet to a being that manages it better than we ever did.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2025-04-28T04:3… |
| Likes | 10 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgyfF6QXO6jqVjh4g514AaABAg.AHPFSMBQX0UAHQqopQaaCO","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgytvmpaFgzNIhTEXDN4AaABAg.AHP1jTbDjReAHPN40-GLCn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzrRiJFzGw99pmTaR94AaABAg.AHOxL6yiUfrAHPUIRabcnA","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwWTHgl7cVfx9MFtJN4AaABAg.AHOwUUhSKZLAHS1ad4-F87","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytr_UgwWTHgl7cVfx9MFtJN4AaABAg.AHOwUUhSKZLAHSF0FKEVhx","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgwHxYgbB-TfOixEaOB4AaABAg.AHOwJHnZEj1AHR9bWKKS-j","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgwHxYgbB-TfOixEaOB4AaABAg.AHOwJHnZEj1AJ-iD0fv6Rj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxBM-JXleXn2KDyDdB4AaABAg.AHOtBy65WLMAHPVXvOVzRI","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyVD7lcuxhgf1XIFfZ4AaABAg.AHOpa9-6s9MAHPP4GtmO6Q","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_Ugy01BtmXM0LOPMrkMF4AaABAg.AHOoA0P4SEFAHQHUX5Ya5k","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
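A raw response like the one above can be parsed and indexed by comment ID for the kind of lookup this page supports. The sketch below is a minimal example, assuming the allowed values for each dimension are those visible in this batch (the real codebook may define additional categories), and `ytr_example` is a hypothetical ID:

```python
import json

# Allowed values per dimension, inferred from the sample batch above;
# the actual codebook may include categories not seen here.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "government", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding records) and
    index the records by comment ID, rejecting unknown dimension values."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {rec[dim]!r}")
        by_id[rec["id"]] = rec
    return by_id

# Usage with a hypothetical one-record response:
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytr_example"]["policy"])  # ban
```

Validating against a closed value set at parse time catches the common failure mode where the model invents an off-codebook label, so bad records fail loudly instead of silently skewing the tallies.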