Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would say the problem is not the AI itself, since it doesnt understand what the words it speaks mean, but rather it just looks up at what should come next in a sequense of words, without understanding the meaning we humans put into each word, It kinda responds the way most would. And that way, in the internet, is exactly the kind of scary shit it outputs when RLHF fails.
Source: YouTube · AI Moral Status · 2025-12-20T15:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugx_4_lVlp5FOcpvxGp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugy2r3DfGQIKMx_I5nZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxo4iKlOp0d2nwT_154AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyEUyxsJnoY9x417Mx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzcUMtnOKgw8s6iV0l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyZ3bTVRSq6e7irIZN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwlB7oV_6FoRLRYUyJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzbiaZ4yCzlnnNDWLd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxVIP_YNkeOrgVFEJR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwDS3OR2lOeObrG5ax4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]