Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is interesting but probably not really targeting the right areas of the industry. Only applies to companies who have sunk 100m into training their model (or more) or have a model using a certain level of compute power. Both are kinda dumb metrics to use BUT more importantly this doesn't consider what I think is the way more dangerous side of this industry. All the little AI start ups that are simply wrapping shit around GPT or Claude API and then calling it done. I know of several that are doing this in the policy space (yes, like governance, laws, etc) who also have zero understanding of how to properly mitigate hallucinations or properly give info with sources so that someone using it can easily cross reference. The downstream companies will be the one doing serious harm if left unchecked. And honestly this is coming from someone who has started an AI start up, I'm one of these downstream companies and I can easily see where the dangers lie.
reddit AI Responsibility 1724505240.0 ♥ 3
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ljq2tqi", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_ljp7e7k", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_ljrp1wi", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_ljrqxsu", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_ljq7m32", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
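A raw response in this shape can be turned back into per-comment codes by parsing the JSON array and indexing on the `id` field. The sketch below is a minimal illustration, not the actual coding pipeline; the variable names and the lookup helper are assumptions for the example.

```python
import json

# Raw LLM response: a JSON array of coded items, as shown above
# (truncated to two items for brevity).
raw = '''[
  {"id": "rdc_ljq2tqi", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_ljp7e7k", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the coded items by comment id for O(1) lookup.
codes = {item["id"]: item for item in json.loads(raw)}

# Retrieve the coding result for one comment.
coded = codes["rdc_ljp7e7k"]
print(coded["responsibility"], coded["policy"], coded["emotion"])
# → company regulate fear
```

Keying on `id` makes it easy to cross-reference each coding result with the original comment record, matching how the "Coding Result" table above pairs one comment with its coded dimensions.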