Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I sure hope some sort of process is started to put some guardrails in. I'm really not worried about sci-fi scenarios around self-awareness any time soon. But there's a huge gap between that and something that can do nearly everything that we currently pay humans to do. The people I supported in my last job could have all been replaced quite effectively by something just slightly better than GPT3. At the current rate, my job could probably be replaced quite well in a few years here. Programmers and huge chunks of the IT field are on the chopping block. Content creators, editors... It's coming one way or another. It's not a question of if AI is going to get good enough to replace a whole lot of our workforce, it's a question of when it does, and how well regular people are protected when that happens. Sadly, everything I know about how this country works tells me that the solution arrived at will be whichever one maximizes corporate profits.
Source: reddit · Topic: AI Governance · Timestamp: 1694539679.0 · ♥ 15
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_k0eiuqs", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_k0b8zdt", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_k0ea3dt", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_k0a99mp", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_k0abvic", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
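A minimal sketch of inspecting this batch response programmatically, assuming Python. The raw output is a JSON array with one record per coded comment; the record id `rdc_k0a99mp` is an assumption inferred from its fields matching the coding result above, as the comment-to-id mapping is not stated explicitly:

```python
import json

# Raw LLM response as shown above: a JSON array, one record per coded comment.
raw = """[
  {"id": "rdc_k0eiuqs", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_k0b8zdt", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_k0ea3dt", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_k0a99mp", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_k0abvic", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

records = json.loads(raw)

# Index records by id for quick lookup of any coded comment.
by_id = {r["id"]: r for r in records}

# Hypothetical id for this comment, inferred from the matching coding result.
coding = by_id["rdc_k0a99mp"]
print(coding["policy"], coding["emotion"])  # regulate fear
```

Indexing by id this way makes it easy to cross-check the rendered coding result against the exact model output for any comment in the batch.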