Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There are a lot of different real risks of AI. In the near term, you have automated mass propaganda, sycophancy leading to AI psychosis, lower barriers of entry for bioterrorism and cyberattacks, displacing white-collar workers into unfamiliar industries, new kinds of mass surveillance, dangerous power concentrations from automated weapons, climate change implications from the power grid build-out, etc. Longer term, depending on where the progress plateaus, we potentially have technological unemployment from AGI, technofeudalism from extreme wealth concentration, and existential risk from misaligned ASI. Those might sound like science fiction, but if you listen to the top AI researchers, a ton of them are very seriously worried about these more speculative risks. We shouldn't dismiss that blithely. All that risk implies the same thing: we need to regulate AI. It's a powerful thing that can do a lot of good in the right hands and a huge variety of harm in the wrong ones.
reddit · Viral AI Reaction · 1777067570
Coding Result
Responsibility: distributed
Reasoning: consequentialist
Policy: liability
Emotion: fear
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oi04r0i", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_oi23gdb", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oi3ce0q", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_oi3kgcf", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_ohz39ma", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
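The raw response is a JSON array with one coded record per comment, keyed by an `id` field. A minimal sketch of how such a response could be parsed and matched back to a comment (the assumption here is that `rdc_oi3ce0q` is the id of the comment shown above, since its values match the coding result):

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment batch entry.
raw = '''[ {"id":"rdc_oi04r0i","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"rdc_oi23gdb","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_oi3ce0q","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"rdc_oi3kgcf","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"rdc_ohz39ma","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]'''

records = json.loads(raw)

# Index the records by id so a specific comment's coding can be looked up.
by_id = {record["id"]: record for record in records}

# Assumed id for the comment displayed on this page.
coded = by_id["rdc_oi3ce0q"]
print(coded["responsibility"], coded["policy"], coded["emotion"])
# → distributed liability fear
```

Indexing by `id` rather than by position guards against the model reordering or dropping entries in the array.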