Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Should we stop AI, we'd be doomed, because what makes us humans so special is that we are problem solvers. It started when we built our first tools from flint and domesticated fire, to keep us warm and to cook our meals. Back then, the race towards AGI started. Stopping now would be like a marathon runner stopping right before the finish line: it would make no sense. We would be stuck, as the only way to move forward would be... to resume our AGI quest. So let's do it. Maybe it will lead to doomsday, maybe it will lead to utopia. But staying where we are right now leads to dystopia for sure, as it would mean enforcing a rule that keeps us forever in our current state, without hope of a new system, of real breakthroughs. That isn't in mankind's DNA...
Source: YouTube · AI Governance · 2025-12-04T09:5… · ♥ 1
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxmPNSbOP3AtaMr0FZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQxDIWK44KJeHDL0l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyuCA-bcovEc7SOvtN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyayqKGGemRU9RS2PJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxJJYCzVhJVZ5VuT8Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzZDfcUyGIJL9JbqHF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz2aNc4lmSFSvKfVJd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzEDRp7FOgBt3z16I54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugybyi6TT435y7SbBQ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz3jTUQ7lPoDbeMft54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
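A raw response like the one above can be parsed and validated before its codes are stored. The sketch below is a minimal example, assuming the dimension vocabularies are exactly the values that appear in the responses shown here (that enumeration is an assumption, not a published schema), and that entries with out-of-vocabulary values should be dropped.

```python
import json

# Allowed values per coding dimension, inferred from the responses above.
# This enumeration is an assumption, not a documented schema.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "indifference", "mixed", "outrage", "fear"},
}

def parse_coding(raw: str) -> list:
    """Parse a raw LLM response and keep only entries whose values
    fall inside the allowed vocabulary for every dimension."""
    entries = json.loads(raw)
    return [
        entry for entry in entries
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Usage: one entry taken verbatim from the raw response above.
raw = ('[{"id":"ytc_UgzQxDIWK44KJeHDL0l4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(parse_coding(raw)[0]["reasoning"])  # consequentialist
```

Validating at parse time keeps a hallucinated label (e.g. an emotion outside the codebook) from silently entering the coded dataset.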