Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't see why anyone thinks empathy is important or even relevant to the AI alignment problem. We've had empathy for hundreds of thousands of years, and it has done little to constrain our evil tendencies.
youtube · AI Moral Status · 2025-12-09T03:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugyuk1hBtKCsoVIMlGV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxCMM_nHx7vx3CUi4B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzVveglUuOdEOPDz0Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxX7TxDaYQ34a1_0RB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxtOHpYkiOjd13ruUR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzysJ0DzXsAajmE7B54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwAYhjcm3oJXCmDcaR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxqlRre80WFcsF3yyF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzNGyLesXo-3GaVwj94AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwRHE4F_qhgUGRtXdJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]