Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> OpenAI said it will stop assessing its AI models prior to releasing them for the risk that they could persuade or manipulate people, possibly helping to swing elections or create highly effective propaganda campaigns.

> The company said it would now address those risks through its terms of service, restricting the use of its AI models in political campaigns and lobbying, and monitoring how people are using the models once they are released for signs of violations.

Most of you are misinterpreting the headline. It's not about AI getting tricked; it's about not caring whether the AI is weaponized to influence people. Well, they are 'caring' by forbidding it in the ToS... but I figure a good chunk of their revenue probably comes from people running various campaigns, whether 'legit' marketing or political etc., so they probably won't want to lose that money just yet.
reddit · AI Governance · 1745180935.0 · ♥ 3
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          industry_self
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mo56l1j", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_mo7sier", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_mo4qsx2", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_mo5atsd", "responsibility": "company", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"},
  {"id": "rdc_mo97ykz", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
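The raw response is a JSON array of per-comment codes, one object per comment id, with one value per coding dimension. A minimal sketch of parsing and validating such a response (the allowed-value sets below are inferred only from the values visible in this response; the actual codebook may define more categories):

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the values
# appearing in the raw response above, not from the full codebook.
ALLOWED = {
    "responsibility": {"government", "company"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "none", "liability", "industry_self", "ban"},
    "emotion": {"fear", "resignation", "indifference", "outrage"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {id: codes}, rejecting
    any value outside the expected category set."""
    coded = {}
    for rec in json.loads(raw):
        rec_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec_id}: unexpected {dim}={rec[dim]!r}")
        coded[rec_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example: the record backing the Coding Result table above.
raw = ('[{"id":"rdc_mo5atsd","responsibility":"company",'
       '"reasoning":"deontological","policy":"industry_self",'
       '"emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["rdc_mo5atsd"]["policy"])  # industry_self
```

Validating against a fixed category set at parse time catches the common failure mode of LLM coders drifting outside the codebook (e.g. inventing a new emotion label) before the codes reach analysis.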