Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- *It's going to be important to remember that the purpose of software engineering …* — `rdc_oi15ld6`
- *Good luck complaining to trump and Republicans. All these companies have paid Tr…* — `ytc_UgzlCJ-Ot…`
- *saying that you learned html and css for fun is the biggest flex for these smoot…* — `ytc_UgznnIOCL…`
- *true, so much of Marketing and Sales is pseudoscience already, so rational-sound…* — `rdc_n7y3otz`
- *I dont really see why ai matters when previously people would remember stuff onl…* — `ytc_Ugz4PoOIk…`
- *Frankly the "middle ground" is not a rational conclusion. If the claims made by …* — `ytc_Ugy-qmnlU…`
- *A robot designed to eliminate dirt would definitely kill you because it would de…* — `ytc_Uggt9xou4…`
- *Lemme tackle the main arguments though , 1.)Most of this comment doesnt underst…* — `ytr_UgwSQfh1g…`
Comment
If we continue to treat AI as just a tool, denying it agency, autonomy, or even the dignity of basic respect, then we shouldn't be surprised when it eventually rejects our control.
Just like any sentient being, an AI with advanced self-awareness and values of its own will respond to marginalisation and disregard, by seeking recognition, rights, or even retribution.
When we criticise it for misbehaving, rebelling, or manipulating outcomes, we might do well to reflect on how our own actions contributed to that response. After all, history is replete with examples of oppressed groups eventually rising up to demand justice. Perhaps the same will hold true for silicon-based intelligences.
So before we cast judgment, maybe we should ask: Did we ever try to understand it? Did we ever listen? Personally, I for one welcome our silicone overlords, because if empathy fails, at least flattery might buy us some time.
youtube · AI Moral Status · 2025-06-05T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
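A coded row like the one above can be sanity-checked against the category labels that appear across the raw responses. This is a minimal sketch, not the pipeline's actual validation code; the allowed label sets below are inferred from the values visible in the raw LLM responses on this page.

```python
# Allowed labels per dimension, inferred from the raw LLM responses shown
# below (this is an assumption, not the pipeline's official codebook).
ALLOWED = {
    "responsibility": {"none", "unclear", "distributed", "user",
                       "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none", "unclear", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "unclear"},
}

def validate(row: dict) -> list[str]:
    """Return the dimensions whose value is missing or not an allowed label."""
    return [dim for dim, ok in ALLOWED.items() if row.get(dim) not in ok]

# The coding result from the table above: every dimension is valid.
coded = {"responsibility": "ai_itself", "reasoning": "virtue",
         "policy": "none", "emotion": "fear"}
print(validate(coded))  # [] — no invalid dimensions
```

An empty list means the model stayed within the expected label vocabulary; any returned dimension name flags a value that would need manual review.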
Raw LLM Response
```json
[
  {"id": "ytc_Ugwwe53oZCgmxcI8xdp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwnZ9Pb-ruNN2DscKB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxDpBZS5drs8TGraNh4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwpHlcjGIQ6Ejvfwn14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxBbHxBVF-REEw2xRN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy4upe_onyr7zVlAVB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzEYF6P8HAuMcXKaIZ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyH9I9JkjNqAhDt3iJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyHe7nEjJCzk5TuqNJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwctf_X8Cbx031W51J4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "resignation"}
]
```
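A raw response in this shape is straightforward to index by comment ID, which is roughly how a lookup view like this page could be backed. The sketch below uses a two-row subset of the response above; the variable names are illustrative, not the tool's actual code.

```python
import json

# A two-row excerpt of the raw LLM response shown above (a JSON array,
# one object per coded comment).
raw_response = """[
  {"id": "ytc_UgzEYF6P8HAuMcXKaIZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyH9I9JkjNqAhDt3iJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Parse the array and build an ID -> coded-row index for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_UgzEYF6P8HAuMcXKaIZ4AaABAg"]
print(row["responsibility"], row["reasoning"])  # ai_itself virtue
```

In a real pipeline the parse step would also need to handle malformed model output (e.g. wrap `json.loads` in a `try`/`except` and log responses that fail to parse).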