Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You know it was the case anyway. Garbage in, garbage out. The speed of implementation says that they built on top of something existing and not that it was built themselves. An LLM is not a general AI, so by it's very nature is going to make severely stupid mistakes. If all you care about is providing the illusion of support , then you can fire your support team.
reddit · AI Responsibility · 1689210565.0 · ♥ 44
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jrqv9gj", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jrqfd0p", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jrrr8f1", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jrs5tlw", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jrrvenj", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]