Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is just demonstrably false. Most safety researchers are very pro-AI and very bullish on the future benefits of AI. But those benefits will always be there for us to seize - what is the rush in getting there as soon as possible, when it could have catastrophic consequences? Why not slow down a little, and make sure we realise the benefits rather than end up down some other timeline.
Source: reddit · Thread: AI Moral Status · Timestamp: 1738011695.0 · ♥ 15
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_m9iq72s", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_m9jhiub", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_m9i6ncu", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_m9ijp9w", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_m9iqann", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
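A minimal sketch of how a raw response like the one above might be parsed and sanity-checked before the codings are stored. The allowed value sets below are assumptions inferred only from this sample, not the study's full codebook, and `parse_codings` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Assumed dimension vocabularies, inferred from the sample batch above.
# The real codebook likely defines more values per dimension.
ALLOWED = {
    "responsibility": {"developer", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "resignation", "approval", "outrage"},
}

def parse_codings(raw: str) -> list:
    """Parse a raw LLM JSON array and flag out-of-vocabulary values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
    return records

raw = ('[{"id":"rdc_m9iq72s","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings[0]["responsibility"])  # → developer
```

Validating against a fixed vocabulary at parse time catches the most common LLM coding failure, an invented label that silently corrupts downstream tallies.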