Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't see a review of, or engagement with, current AI security literature. At the moment, and I am optimistic about AI, we are barely able to get it to drive a car safely. You're asking where it would like to go for the weekend. The moral status of AI may become an important question. It is not the urgent question now. [edit to add] It's only a paradox if you conflate the two questions: How do we ensure AI research doesn't accidentally build SkyNet? How do we protect conscious beings that just happen to have silicon souls? You are not exposing a contradiction in AI safety. You are smashing a control problem into a future personhood problem and claiming the wreck is profound.
reddit · AI Moral Status · 1775188764.0 · ♥ 2
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        unclear
Policy           regulate
Emotion          resignation
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_odw6cq3", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_odziesn", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_oe2gs4q", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_oe0f9rw", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "rdc_oe2idtt", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
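The raw response above is a JSON array with one record per coded comment, keyed by id; the Coding Result table corresponds to the record whose id matches this comment (here, rdc_oe0f9rw). A minimal sketch of that lookup, assuming this record shape — the function name `coding_for` and the two-record sample string are illustrative, not part of the tool:

```python
import json

# Sample batch response in the same shape as the raw output above
# (two records copied from it; a real response may hold many more).
raw_response = """
[{"id": "rdc_oe0f9rw", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "regulate",
  "emotion": "resignation"},
 {"id": "rdc_oe2idtt", "responsibility": "unclear",
  "reasoning": "deontological", "policy": "unclear",
  "emotion": "indifference"}]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw: str, comment_id: str) -> dict:
    """Return the coding record for one comment id, checking that
    every expected dimension is present."""
    records = {rec["id"]: rec for rec in json.loads(raw)}
    rec = records[comment_id]
    missing = [d for d in DIMENSIONS if d not in rec]
    if missing:
        raise ValueError(f"record {comment_id} missing: {missing}")
    return rec

result = coding_for(raw_response, "rdc_oe0f9rw")
print(result["responsibility"], result["policy"])  # developer regulate
```

Indexing by id rather than array position is deliberate: the model may return records in any order, so position alone is not a safe join key.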