Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The gamble is that it *won’t* be shitty in about 3-5 years of time. It’s not yet known if that will turn out to be correct or not. Current signs say… it’s possible. But, that is contingent on hallucinations being minimized to the point of having better-than-human error rates. It also seems contingent on companies like OpenAI and Anthropic having continued access to copyrighted data online without consequence. If one of those above conditions aren’t satisfied, the AI bubble will pop. By then, I’m sure Wall Street investors will likely shift focus to quantum computing.
Source: reddit · AI Jobs · 1743022983.0 · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mjv8s61", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "rdc_mjw22wt", "responsibility": "user",    "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "rdc_mjwgzbi", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mjwlckv", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none",    "emotion": "resignation"},
  {"id": "rdc_mjww0wm", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"}
]
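The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a response could be parsed and a single comment's coding looked up by its id (the field names and the id `rdc_mjwgzbi` are taken from the response above; the parsing code itself is an illustration, not the tool's actual implementation):

```python
import json

# An abbreviated copy of the raw LLM response shown above.
raw = """[
  {"id": "rdc_mjv8s61", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mjwgzbi", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]"""

records = json.loads(raw)

# Index the records by comment id so each coding can be matched
# back to the comment it describes.
by_id = {record["id"]: record for record in records}

print(by_id["rdc_mjwgzbi"]["emotion"])  # → fear
```

Indexing by id rather than by position keeps the lookup robust if the model returns the array in a different order than the comments were submitted.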