Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For conversation, even if you think there's not something inherently "dangerous" with the current state of AI functionality, let's say all industries pause the research and development of AI anything for a few years... When do they start again? What are the specific conditions that make it "safe" to pick it up and do it again? And who approves this starting up again? Or can someone or some group force someone to stop? You just know that even if this pause is enacted, the only ones that will adhere to the pause are law-abiding, ethical people. Other will still continue on with AI dev. Historically, the better thing would be for leading companies and researchers to build a consortium to develop guidelines for ethical development and use.
Source: reddit · Topic: AI Governance · Timestamp: 1680105254.0 · ♥ 6
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_je5jwe4","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_je5fl74","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_je3kd6i","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_je4izhq","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_je4l2ye","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
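The raw batch response above can be inspected programmatically. Below is a minimal sketch in Python, assuming the model's output is the JSON array shown; the `lookup` helper and the `DIMENSIONS` tuple are illustrative names, not part of any actual pipeline, and the dimension labels are inferred from the values seen on this page.

```python
import json

# Raw batch response as returned by the model (copied verbatim from above).
raw = """[
  {"id":"rdc_je5jwe4","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_je5fl74","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_je3kd6i","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"rdc_je4izhq","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_je4l2ye","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]"""

# Coding dimensions assumed from this page; the real codebook may differ.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_json: str, comment_id: str) -> dict:
    """Return the coding record for one comment id, or raise KeyError."""
    records = json.loads(raw_json)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

# Inspect the record behind the coding-result table above.
record = lookup(raw, "rdc_je5jwe4")
for dim in DIMENSIONS:
    print(f"{dim}: {record[dim]}")
```

Because the model returns one JSON object per comment id, indexing by `id` makes it easy to cross-check any row in a coding-result table against the exact model output that produced it.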