Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wow, 10 comments in and I swear no one read the article. I was just listening to NPR talk about this hearing, so this was a good read to accompany it. He’s talking about fears of how AI will devolve society in ways worse than social media. Instead of creating the tech and then realizing the harm, he’s asking the harm be evaluated now and safeguards put in place. His specific example is the upcoming 2024 election. How AI will easily manipulate people, deep fake videos and sound bites that can be created with just a few minutes of input material. Bad actors at home and abroad can easily target and influence voters with hyper targeted content. Trust in society will breakdown. And what happens if a society loses all trust in its institutions?
reddit AI Harm Incident 1684277331.0 ♥ 45
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           regulate
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jkff6n3", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_jkfq0gu", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jkfgf9u", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_jkf33bw", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_jkf83ix", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]
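Before a batch response like the one above is stored as coding results, it helps to parse the JSON and reject entries whose values fall outside the codebook. A minimal sketch in Python, assuming the dimension vocabularies are just the values seen in this response (the `ALLOWED` sets and the `parse_coding_response` helper are hypothetical, not part of the actual pipeline):

```python
import json

# Allowed values per dimension, inferred only from the codes visible above.
# Hypothetical vocabularies -- extend these to match the real codebook.
ALLOWED = {
    "responsibility": {"none", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"approval", "outrage", "fear", "mixed"},
}

def parse_coding_response(raw):
    """Parse a raw LLM response; keep only well-formed codings."""
    items = json.loads(raw)
    valid = []
    for item in items:
        if not isinstance(item, dict) or "id" not in item:
            continue  # skip entries without an id
        # Every dimension must be present with an allowed value.
        if all(item.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(item)
    return valid

raw = ('[{"id":"rdc_jkff6n3","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"approval"}]')
print(parse_coding_response(raw))
```

Entries with an unknown value in any dimension are silently dropped here; a real pipeline might instead log them for manual review.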