Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI will do exactly what it is programmed to do and nothing more. If it is designed to shoot people with tentacles sprouting out of their heads, it will shoot people with tentacles sprouting out of their heads and no one else. Can it misidentify? Sure. But given design specificity and the level of camera technology we're at, unless there's a full-blown corruption of the target-identification algorithm, this is less likely to happen than human error. Like when people say they're wary of the future of self-driving cars. Do you have a 360-degree camera in your head with UV and thermal vision, and can you react in several milliseconds? The only way we're getting Terminator-level AI is if we specifically make AI that can prioritize its own well-being over a human being's life and act violently toward that outcome. Sure, some mad scientist could do it, but it's totally unnecessary for ANY application. Your printer needs to print. It doesn't need to ponder the philosophical reasons why printing your ass cheeks might be considered rude and unprofessional.
reddit AI Responsibility 1648739428.0 ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_i2ud2rn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_i2ue5h6", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_i2umohl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_i2unzje", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_i2utzrs", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
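The raw response is a JSON array with one coding object per comment; the entry with id rdc_i2utzrs matches the Coding Result shown for this comment. A minimal sketch of how such a response could be parsed and looked up by id (the `by_id` helper is illustrative, not from any particular codebase):

```python
import json

# The raw LLM response shown above: a JSON array with one coding per comment.
raw = '''[
  {"id": "rdc_i2ud2rn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_i2ue5h6", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_i2umohl", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_i2unzje", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_i2utzrs", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]'''

# Index the codings by comment id so one comment's result can be retrieved directly.
by_id = {coding["id"]: coding for coding in json.loads(raw)}

# The comment displayed above was coded under id rdc_i2utzrs.
print(by_id["rdc_i2utzrs"]["responsibility"])  # → developer
```

This reproduces the dimension values in the Coding Result table (responsibility, reasoning, policy, emotion) from the raw output.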