Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I hate that every Google search now generates an LLM AI summary, and often times it's outdated or wrong. What pisses me off is every search now generates power usage far beyond when it was just top results. How many tokens does each search results take now?  I hate that I search <major service> outage status and it says "yes <major service> is down right now" and it's just a copy of the top search results from 1+ year ago
reddit · AI Harm Incident · 1752888891.0 · ♥ 126
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_na3b2i5","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_nagw4gv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"rdc_n3x3nkp","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"rdc_n3x857b","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},{"id":"rdc_n3ymtse","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
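A minimal sketch of how a batched raw response like the one above can be parsed back into per-comment codes. The `codes_for` helper and the truncated two-record sample are illustrative assumptions, not the tool's actual code; only the field names come from the response shown.

```python
import json

# Illustrative two-record excerpt in the same shape as the raw response
# above: a JSON array with one coding record per comment in the batch.
raw = (
    '[{"id":"rdc_na3b2i5","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_n3x857b","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"outrage"}]'
)

def codes_for(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment, keyed by dimension name."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]
    # Drop the id so only dimension -> value pairs remain.
    return {k: v for k, v in record.items() if k != "id"}

print(codes_for(raw, "rdc_n3x857b"))
# {'responsibility': 'company', 'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'outrage'}
```

Looking up records by `id` rather than by position keeps the mapping correct even if the model returns the batch in a different order than it was sent.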