Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Is it too late to cease Ai? / Will everyone be… laid off by Ai? Suffering Ai jobl…" (ytc_UgzoXZpBW…)
- "Alex: so when you said you were sorry, you weren't actually sorry? / ChatGPT: Tha…" (ytc_UgwVq2kfM…)
- "The real truth is these companies are offshoring the jobs that require a colle…" (ytc_UgwIjQ7HI…)
- "Please people. Is not making funny about the outhers misery ! Today is him tomor…" (ytc_Ugymvdot3…)
- "@maxdefire it can possibly happen cause tbh science and engineering are objecti…" (ytr_Ugyr7W1Yn…)
- "We're glad you enjoyed the video! Sophia's insights on wisdom and learning are t…" (ytr_Ugz8Rlkh3…)
- "And rolfcopter, with aqll this no AI startup can create intelligent Artificial I…" (ytc_UgwYo4SiX…)
- "Unfortunately, I think you are arguing the wrong point here. AI art is real art …" (ytc_UgzquolJP…)
Comment
/u/ComplexExponential asked:
>4. And continuing on my above point, if will is seen as a programmable phenomenon, then can a genetically engineered being without a will be considered exempt from all ethical considerations? And if not, then doesn't AI deserve equal ethical considerations?
Something that doesn't have a will is probably not a moral agent. If so, we couldn't hold it responsible. However, it may still be an 'innocent' threat, like a rock on top of a building that could fall and hit someone on the head. To that extent, we could take measures to protect ourselves against it.
reddit · AI Moral Status · 2017-02-15 (Unix timestamp 1487175073) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_dds1kol","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_dds1ott","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_dds7hhz","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_dds1oe9","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_dds3r39","responsibility":"user","reasoning":"mixed","policy":"liability","emotion":"fear"}
]
```
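The raw response above is a JSON array in which each element carries one comment ID plus the four coded dimensions (responsibility, reasoning, policy, emotion). Looking up the coding for a single comment can be sketched as below; this is a minimal illustration, not the tool's actual implementation, and the helper name `lookup_coding` is hypothetical. The two sample rows are taken from the raw response shown above.

```python
import json

# Raw batch output from the coding model: a JSON array, one object per comment.
# The two rows below are copied from the raw response shown above.
raw_response = """[
  {"id":"rdc_dds1kol","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_dds1oe9","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the model output and return the coded row for one comment ID,
    or None if that ID is absent from the batch."""
    rows = json.loads(raw)
    return next((row for row in rows if row.get("id") == comment_id), None)

coding = lookup_coding(raw_response, "rdc_dds1oe9")
print(coding["responsibility"], coding["reasoning"])  # ai_itself deontological
```

Matching on the `id` field is what lets a batch of codings be joined back to the original comments, as in the "Coding Result" table above.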