Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_UgzmLSH8v…`: "I havent heard of anyone who genuinely thinks AI is good for our society so how …"
- `rdc_d3xegkb`: "I think things will continue as they are for another hundred years, as technolog…"
- `ytc_UgyXMzmaT…`: "So we need a robot for everything. What an idiot he even said he had thought abo…"
- `ytr_UgwV9OgTL…`: "Not to worry, Christian! ClickUp AI is being developed alongside ClickUp 3.0, no…"
- `rdc_ksku5kz`: "See [https://www.fastcompany.com/91039401/klarna-ai-virtual-assistant-does-the-w…"
- `ytc_Ugx33hW5G…`: "AI can be many things, but it lacks motivation, I wouldn't worry about pure AI t…"
- `ytr_UgwI-TX7V…`: "no only WOKE ai or any AI with one sided political views or social views, amd wi…"
- `ytc_Ugws2nD4j…`: "Instead of trades, we need people building robots. Take your tech skills and ups…"
Comment
> Its hard to find a human responsible for an autonomous thing.
> Is it the robot programmer, the CEO, the owner of the Robot, the last person to give a command, or the robot itself?
>
> Thinking about it...we already have autonomous systems that are a part of society that result in many deaths of people directly and indirectly where no one person can be held responsible. For example, systems that decide the distribution of resources, or government laws.
>
> When we imagine an autonomous system capable of killing, our imaginations anthropomorphise and jump to Terminator or Big Dog looking robots. Yet those will likely be a minority of robots. What about the robot operating the water treatment plant making a mistake? What about the programming that would control road traffic causing deaths?
>
> We need to be able to answer the morality with these abstract things and see the big picture if we hope to also answer it for humanoid-ish robots; a smaller sector of the autonomous world.
>
> I suspect that once we begin discussing this we will realize there is no where to point the finger. But is that an issue? I hope we can also ask ourselves why we want to point the finger so badly post hoc when studies continue to show that punishment does not prevent crime, while incentives do. We are so used to hitting people with justice hammers in revenge that we feel lost, afraid, and confused when there is no where to point the finger.
>
> What I think autonomous systems need to keep them safe is a system of responsible incentives...the same things that keeps governance and economic systems operating correctly. Lets spend less time trying to figure out the flow chart for finding who to point fingers at and more time questioning if the incentives and power distribution of society is creating a safe environment for autonomy. Autonomy is going to be more of a risk than robot crime, its going to massively unemploy and relocate power and incentives. Responsible environment and incentives is the key. Tryi
Source: reddit
Topic: AI Moral Status
Timestamp: 1429545225.0
Score: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_cqjacgd","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_cqikxcw","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_cqj083t","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_cqisk6b","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"rdc_cqipxsl","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
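Looking up a coded comment by ID, as the page above does, amounts to parsing the raw LLM response (a JSON array of per-comment codes) and indexing it by the `id` field. A minimal sketch, assuming the response always follows the array-of-objects shape shown above (the variable names here are illustrative, not part of the tool):

```python
import json

# A raw LLM response in the format shown above: a JSON array where each
# object carries a comment ID plus one value per coding dimension.
raw_response = """
[
  {"id":"rdc_cqjacgd","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_cqikxcw","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
"""

# Index the codes by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Fetch the codes for one comment.
code = codes_by_id["rdc_cqikxcw"]
print(code["responsibility"])  # distributed
print(code["emotion"])         # mixed
```

In practice the response should be validated before indexing (e.g. that every object has an `id` and that dimension values come from the codebook's allowed set), since nothing forces the model to emit well-formed JSON.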