Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Idk I think those corporate class jobs are actually future proofed against AI for a long while, precisely because it's so ambiguous what exactly they contribute to the company. If you can't define what a person's core task is, it's very difficult to quantitatively demonstrate that an AI can perform that task better. Now you can say "well that will just prove these jobs are bullshit", but we largely already know these jobs are bullshit and that has changed things exactly 0%. If, however, your job is to write X lines of functional code, or write X patient chart reviews, it's very easy to demonstrate that an AI can produce 15x the amount of intellectual product in the same time frame. And then your department collapses very quickly into 1-2 people managing LLM outputs.
reddit AI Moral Status 1751006508.0 ♥ 38
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n00tpvl", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzz751n", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n00e942", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzznir3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n00o84k", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
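A minimal sketch of how a batch response like the one above could be parsed and matched back to a single comment's coding result. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON itself; indexing by `id` is an assumption about how the pipeline looks records up, not a documented implementation.

```python
import json

# Raw LLM response: one JSON array, one record per coded comment.
raw = '''[
  {"id": "rdc_n00tpvl", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mzz751n", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n00e942", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzznir3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n00o84k", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

records = json.loads(raw)

# Index by comment id so one comment's coding can be looked up directly.
by_id = {r["id"]: r for r in records}

# The coding table shown above corresponds to record rdc_n00tpvl.
coding = by_id["rdc_n00tpvl"]
print(coding["emotion"])  # indifference
```

Since the model returns all five codings in a single array, a lookup like this is what ties the "Coding Result" table for one comment to its entry in the raw batch output.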