Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
2026: His prediction was wrong. AI can make a mid level engineer more productive…
ytc_UgxEeGKwZ…
That's an interesting perspective! In the video, Sophia emphasizes her continuou…
ytr_Ugzq3UhrE…
Unfortunately those type of people who believed AI more than physical proof will…
ytc_UgzG5vgnt…
Not a remedy but the solution/answer that makes sense because we are going to co…
ytc_UgyCRxbaW…
tried a bunch of tools like these... DarLink AI is the only one that combines gr…
ytc_Ugzm1eNwR…
Folks don't care unless it directly affects them unfortunately. I saw an AI "ar…
ytc_UgwINW3kE…
We not there yet. This is the Atari 2600 of AI companions. But considering there…
ytc_UgzyhB_kl…
There is no ai art for me. It is just a really really good image generators that…
ytc_Ugy00g7ea…
Comment
I mean if AI ever reaches the level to where it can choose to harm a human, like on a level at which it is doing it for its own personal reasons, we would probably be pretty fucked. I don’t think the legal system would particularly stand much of a chance in a Skynet scenario.
reddit
AI Moral Status
Posted: 1524965344.0 (Unix timestamp, ≈ 2018-04-29 UTC)
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_dy4e3bg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"rdc_dy4ftoz","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
{"id":"rdc_dy4phxw","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"indifference"},
{"id":"rdc_dy54eq6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_dy57k0p","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"}
]
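The raw response is a JSON array with one record per comment, each carrying the four coded dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and indexed by comment ID — the helper name and validation logic are assumptions for illustration, not part of the tool:

```python
import json

# Dimension names taken from the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a JSON array of coded comments into a dict keyed by comment id.

    Raises ValueError if a record lacks an id or any coding dimension,
    so malformed LLM output fails loudly instead of silently dropping codes.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record {rec!r}: missing {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

# Example using one record from the response above.
raw = ('[{"id":"rdc_dy54eq6","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["rdc_dy54eq6"]["emotion"])  # fear
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each coded record is retrieved in constant time rather than by rescanning the raw response.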