Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
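The lookup described above can be sketched minimally. Everything here is an assumption about the tool's internals: the in-memory `RAW_RESPONSES` index and the `lookup_raw` helper are hypothetical, and the example record is taken from the raw response shown later on this page; the real tool would read from whatever store the coding pipeline writes to.

```python
import json

# Hypothetical index: comment ID -> the parsed coding record the model
# returned for that comment. The example entry mirrors rdc_oe0f9rw from
# the raw LLM response displayed on this page.
RAW_RESPONSES = {
    "rdc_oe0f9rw": {
        "responsibility": "developer",
        "reasoning": "consequentialist",
        "policy": "regulate",
        "emotion": "resignation",
    },
}

def lookup_raw(comment_id: str) -> str:
    """Return the exact model output for a coded comment as a JSON string.

    Raises KeyError if the comment ID was never coded.
    """
    return json.dumps(RAW_RESPONSES[comment_id], sort_keys=True)
```

A missing ID surfaces as a plain `KeyError`, which keeps the sketch honest about the fact that not every comment ID resolves to a coded record.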
- "I disagree, any artist will be influenced by different sources ie. art history, …" (ytc_Ugw1nBxyK…)
- "Jesus Christ here we go. First AI scam and it's gonna get long worse from here. …" (ytc_Ugz86Kfba…)
- "i'm very familiar with logistics and supply chain - this story is matter of fact…" (ytr_UgxL9AvQp…)
- "Can AI operate without electricity? Well, human can operate without electricity …" (ytc_Ugz7tKEtD…)
- "a Computer Science professor with an Apple mobile? Come on. Yeah, Ai is useful b…" (ytc_UgxUecZ1_…)
- "I think everyone, regardless of gender, should start wearing Burqas and Niqabs. …" (rdc_g4pd1nd)
- "I have used ai once when making art, and it was becouse I needed to find a natur…" (ytc_Ugw6Dank6…)
- "The enire video... building up nervousness building up fear until finally the en…" (ytc_UgxWBvZYg…)
Comment
I don't see a review of, or engagement with, current AI security literature.
At the moment, and I am optimistic about AI, we are barely able to get it to drive a car safely. You're asking where it would like to go for the weekend.
The moral status of AI may become an important question. It is not the urgent question now.
[edit to add] It's only a paradox if you conflate the two questions:
How do we ensure AI research doesn't accidentally build SkyNet?
How do we protect conscious beings that just happen to have silicon souls?
You are not exposing a contradiction in AI safety. You are smashing a control problem into a future personhood problem and claiming the wreck is profound.
Source: reddit
Title: AI Moral Status
Timestamp: 1775188764.0
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_odw6cq3", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_odziesn", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_oe2gs4q", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe0f9rw", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "rdc_oe2idtt", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
```
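The raw response is a JSON array of per-comment codes, one object per comment, with the four coding dimensions shown in the table above. A minimal, defensive parse of such a payload might look like the sketch below; the `parse_codes` helper is hypothetical, and only the entry for `rdc_oe0f9rw` from the response above is reproduced in the example input.

```python
import json

# The four coding dimensions every entry is expected to carry,
# matching the columns of the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text: str) -> dict:
    """Parse a raw LLM coding response into a dict keyed by comment ID.

    Raises ValueError if any entry is missing a coding dimension, so a
    malformed model reply fails loudly instead of producing partial codes.
    """
    codes = {}
    for entry in json.loads(text):
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{entry.get('id')}: missing {missing}")
        codes[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return codes

# Example input: one entry from the raw response shown above.
raw = ('[{"id":"rdc_oe0f9rw","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"resignation"}]')
codes = parse_codes(raw)
```

Indexing by ID is what makes the "look up by comment ID" view cheap: once parsed, fetching a record is a single dict access.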