Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah exactly. Almost every single post about Google or OpenAI trying to capture their position rarely leads with this point and it is THE MOST IMPORTANT ONE. Every single one is "Dude they are just trying to stop progress!" or something along those lines. Despite Microsoft and OpenAI's obvious motivations - **Can we please at least fucking acknowledge how insanely dangerous these technologies are?**

We are standing on the precipice of great change. The ushers of this great change are telling people who are about to jump that only they can supply parachutes. This is of course nonsense. The answer isn't to then declare "This is nonsense!" while jumping off without a fucking parachute because you will die. You will splat against the ground travelling at terminal velocity and you will be dead.

If you don't know why these technologies are so dangerous, you probably need to go and do some real investigating before you leap into a conversation about it, because the potential danger is unlike anything we've ever come across in terms of how we think societies work or should work. It is a major threat to everything we think we know about ourselves, and if we aren't careful it could cause havoc that we might not be able to walk back from. As Rob Miles said - there is no rule that says it will work out for us. Yes - the technology is going to be hugely beneficial in many, many ways... that will take care of itself; the negative will not take care of itself.
reddit · AI Harm Incident · 1684281992 · ♥ 44
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_fvw3b2g", "responsibility": "none",    "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "rdc_fvwggyl", "responsibility": "none",    "reasoning": "virtue",           "policy": "none",     "emotion": "fear"},
  {"id": "rdc_jkfb78i", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "outrage"},
  {"id": "rdc_jkfhmon", "responsibility": "company", "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "rdc_jkfpcvo", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
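The raw response above is a JSON array with one object per coded comment; the result table corresponds to the entry with id `rdc_jkfpcvo`. A minimal sketch of pulling a single comment's codes out of such a response, assuming the model returns valid JSON exactly as shown (the variable names and the dict lookup are illustrative, not part of the tool):

```python
import json

# Raw LLM response, copied from the source: a JSON array of coded records.
raw = """[
 {"id":"rdc_fvw3b2g","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_fvwggyl","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
 {"id":"rdc_jkfb78i","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
 {"id":"rdc_jkfhmon","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_jkfpcvo","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]"""

records = json.loads(raw)               # one dict per coded comment
coded = {r["id"]: r for r in records}   # index the batch by comment id

# Look up the codes for the comment shown in the result table above.
print(coded["rdc_jkfpcvo"])
# → {'id': 'rdc_jkfpcvo', 'responsibility': 'company', 'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'outrage'}
```

Note the model codes a batch of five comments in one response, so the id field is what ties each record back to its comment.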