Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Random samples
- "Why no mandatory accounting in high schools since 1950? Have Economists figured…" (ytc_UgxmfiHt3…)
- "Being a truck driver is not a healthy lifestyle, at least for long hauls I think…" (ytc_UgxKqSbZ6…)
- "Well is it bad? Id rather say that it depends from what you are saying ai to wri…" (ytc_Ugwq_JNRA…)
- "Individuals *need* tech regulation, esp. w/AI, privacy, hacks, data centers, wat…" (ytc_UgwIrqVmi…)
- "Solar flares and sunspots as in Electromagnetic frequency and electromagnetic pu…" (ytc_UgzGGmj7_…)
- "@inquizitive1 Right now I'm working with framepack and ltx video in comfyUI. I'…" (ytr_Ugz39fpLw…)
- "If AI is us,does that mean we are doomed? Will AI simply amplify our weaknesses …" (ytc_UgzCnSRs9…)
- "Bro if I ever lose to a fight against a fucking robot I will be embarrassed and …" (ytc_Ugwl9vcbH…)
Comment
We need to make robots feel empathy like us. If they didn't, they would kill us.
If we do that, we can't treat them unfairly. I say we make all robots feel pain, pleasure and empathy. If a robot is built for, say mining, we don't give it the ability to feel the pain of intense heat. If it was built for being a teacher, we don't give it the ability to be annoyed by stupid, bratty kids. We only remove one bit of pain for each job.
Oh, and robots should feel happiness when they fulfil their purpose. If we can make robots feel happy, then we can tell them when to be happy.
youtube | AI Moral Status | 2017-02-24T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
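The four dimensions in the table above can be captured in a small schema. A minimal sketch in Python, assuming the value sets observed in this sample batch (the full codebook may define additional categories):

```python
from dataclasses import dataclass

# Allowed values as observed in the sample responses on this page;
# these sets are an assumption, not the authoritative codebook.
RESPONSIBILITY = {"developer", "ai_itself", "none"}
REASONING = {"deontological", "consequentialist", "unclear"}
POLICY = {"regulate", "ban", "liability", "none"}
EMOTION = {"approval", "fear", "indifference", "mixed"}


@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Check every dimension against its observed value set."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```

For example, the coding shown above (developer / deontological / regulate / approval) passes validation, while a record with an unknown value in any dimension does not.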
Raw LLM Response
```json
[
{"id":"ytc_Ugg7JvT5Ke9_Y3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugjp4atLRhJUd3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggjRqdxE5U2-ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjfKgT77yIRgXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg4TuIQPSKXyngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjbVdE7EsFa9XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UghsMX_rPl0ZH3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggUQCGmIZf1bXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgjYaewyXWwmjngCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgiW2xFap75PT3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
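A batched response like the one above can be parsed and checked before coded rows are stored. A minimal sketch, assuming the model is expected to return a JSON array of objects with exactly these five keys (`parse_batch` is a hypothetical helper, not part of the tool shown here):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response into a mapping of comment ID -> codes.

    Raises ValueError if the payload is not a JSON array of complete records.
    """
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    coded = {}
    for row in rows:
        if set(row) != REQUIRED_KEYS:
            raise ValueError(f"malformed record: {row!r}")
        # Index by comment ID so the UI can look a coding up directly.
        coded[row["id"]] = {k: row[k] for k in REQUIRED_KEYS - {"id"}}
    return coded
```

Indexing by comment ID makes the "look up by comment ID" view above a single dictionary access rather than a scan over the batch.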