Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To be fair, we're already hitting peak server loads across the globe, hardware isn't getting better as fast as it used to, electricity and buildable space is getting more and more expensive, etc.. We can make AI better (hardly faster, mostly likely slower as it gets better and has bigger datasets), but scaling it to mass-civil use seems questionable in this decade. EDIT: I'm getting around 5 minutes of processing time locally on my gtx 1080Ti + 13700k/32GB DDR5 if I want a solid 1080p non-complex prompt image on Stable Diffusion 2+Midjourney. Shit's really resource-expensive. And yes, it's extremely easy to generate CP. Please, don't ruin it for everyone. "This is why we can't have good things"
Source: reddit · AI Harm Incident · timestamp 1695596314.0 · score 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_k20wf9x", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_k227kr1", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_k20obua", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_k22895j", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_k1zu4uh", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"}
]
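The raw response is a JSON array of per-comment codings keyed by `id`. A minimal sketch of how such a response can be parsed and indexed for lookup — the field names are taken from the response above, but the helper `coding_for` is a hypothetical illustration, not part of the tool:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment id
# (two records shown here for brevity; ids and fields match the dump above).
raw = """
[
  {"id": "rdc_k20wf9x", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_k227kr1", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

# Index the records by comment id for constant-time lookup.
codings = {record["id"]: record for record in json.loads(raw)}

def coding_for(comment_id: str) -> dict:
    """Return the coded dimensions for a single comment id."""
    return codings[comment_id]

print(coding_for("rdc_k20wf9x")["emotion"])  # -> indifference
```

Indexing by `id` makes it easy to cross-reference a coded comment back to its row in the per-dimension table shown above.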