Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've come to see this as a question of human ethics. AI, in itself, doesn't care about how a solution is reached—it just pursues its objectives. The real question is whether we trust ourselves to make the right decisions. And if things go terribly wrong, would there even be any humans left to learn from the mistake and try again? Personally, I agree with Yudkowsky: we need to pause and develop a backup plan, just in case things take a bad turn.
youtube · AI Governance · 2024-11-15T19:0… · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz2lXGfS_ZGJj9-YRF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy2NGjHaZGBOenSIw14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx_FBWRh4PutrwaqO14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxjmS4ChAa1pC5LFA94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxtW4A2eZ_Kd-iRtEZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyBjocIVbzse4SIKnl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwEVsaJmnKawzjR80d4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx96Wtzml0uoIE4YUt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugyk4FRlgO6zrHPiwaR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwDjH707sp2_LFcPOt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
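To inspect how a specific comment was coded, the raw response can be parsed and indexed by comment `id`. Below is a minimal sketch using Python's standard library; the two entries are copied from the raw response above (the full array contains ten), and the variable names are illustrative, not part of any tool shown here.

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgwEVsaJmnKawzjR80d4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyk4FRlgO6zrHPiwaR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''

# Index the codings by comment id for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding for the comment displayed in this record.
coding = codings["ytc_UgwEVsaJmnKawzjR80d4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
# → user deontological regulate fear
```

The values printed match the "Coding Result" table above, confirming that the table was filled from this entry of the raw response.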