Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Why do so many researchers assume that AI control problem is solvable? To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable," said Dr. Yampolskiy in a press release. "This, combined with statistics that show the development of AI superintelligence is an almost guaranteed event, show we should be supporting a significant AI safety effort," he added. As AI, including superintelligence, can learn, adapt, and act semi-autonomously, it becomes increasingly challenging to ensure its safety, especially as its capabilities grow. It can be said that superintelligent AI will have a mind of its own. Then how do we control it? "No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance," he added.
reddit | AI Governance | timestamp 1708146611.0 | ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_kqt81wm","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_kqsyt6m","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_kqtsbbu","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"rdc_kr3eiya","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"rdc_kqspw3a","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
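A batch response like the one above can be validated programmatically before the codes are stored. A minimal Python sketch, assuming a codebook reconstructed from the values seen in this response (the ALLOWED sets are an assumption, not the tool's actual codebook):

```python
import json

# Assumed codebook: one set of permitted values per coding dimension,
# reconstructed from the values observed in the raw response above.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist"},
    "policy": {"liability", "regulate", "industry_self", "none"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

# The raw LLM response, verbatim from the export.
raw = """
[ {"id":"rdc_kqt81wm","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_kqsyt6m","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_kqtsbbu","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"rdc_kr3eiya","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"rdc_kqspw3a","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]
"""

def validate(records):
    """Return (record id, dimension, value) triples that fall outside the codebook."""
    issues = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                issues.append((rec.get("id"), dim, rec.get(dim)))
    return issues

records = json.loads(raw)
issues = validate(records)
```

For this batch, `issues` comes back empty; any hallucinated or misspelled code in a future batch would surface here as a triple identifying the offending record, dimension, and value.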