Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't understand why everyone tries to complicate the issue. The issue isn't if we get it wrong and AI skynets us, because then the t1000 comes and kills you quickly. The issue is if we get it right or mostly right. AI isn't like any other technology because it's fundamental purposes is different from everything else. AI is an alternative to human intelligence, the thing we get paid for. Why would companies pay money for people who have babies, need time off, sue when wronged, and so on when they could just buy intelligence to do the same job but better. The argument that "AI will never be able to do that" is completely stupid. Look at all other technology. Look at our phones. Not only did we end up always having a calculator on us like our teachers said we wouldn't, it's also a super computer compared to what we had on our desks 15 years ago. Bottom line, AI WILL close all of the gaps that we see today. It won't take Terminators to kill us, it will simply be the fact that we won't be relevant to the work force. We won't get shot to death with lasers, we'll starve to death because no one will need the majority of us anymore.
YouTube · AI Moral Status · 2025-11-04T18:4… · ♥ 1
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | unclear                    |
| Reasoning      | consequentialist           |
| Policy         | unclear                    |
| Emotion        | fear                       |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwYMbnDM2VGwox_aOJ4AaABAg", "responsibility": "ai_itself",  "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgxYOycNbQ4IvjMtNMV4AaABAg", "responsibility": "unclear",    "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgzJTMhPNMrUXQM_uaJ4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxrgXxoJVinsyrkMsV4AaABAg", "responsibility": "unclear",    "reasoning": "unclear",          "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwbeUYUR7QWsH2Lz4t4AaABAg", "responsibility": "developer",  "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgxG5K3a3A25sztovYJ4AaABAg", "responsibility": "developer",  "reasoning": "virtue",           "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgyhsNprkIeiJxp3n3x4AaABAg", "responsibility": "unclear",    "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgwLBIjK-OVx0rpwkFF4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzuvROVe4G8ECnVlfd4AaABAg", "responsibility": "ai_itself",  "reasoning": "contractualist",   "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgyP_aecgiga2Y5SZLt4AaABAg", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "ban",      "emotion": "fear"}
]
```
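To inspect codes for a specific comment, the raw response can be parsed and indexed by comment id. A minimal sketch, assuming only what the response above shows: a JSON array whose elements each carry an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). The `index_codes` helper and the single-record `raw` string are illustrative, not part of the tool.

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of coding
# records, one object per comment (here trimmed to one record).
raw = '''[
  {"id": "ytc_UgyhsNprkIeiJxp3n3x4AaABAg",
   "responsibility": "unclear",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(response_text):
    """Parse an LLM coding response and index records by comment id.

    Missing dimensions default to "unclear", mirroring the coding scheme's
    fallback value.
    """
    records = json.loads(response_text)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw)
print(codes["ytc_UgyhsNprkIeiJxp3n3x4AaABAg"]["emotion"])  # fear
```

Indexing by `id` makes it straightforward to join the model's codes back to the original comment metadata when auditing individual coding decisions.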