Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the alignment problem is fundamentally unsolvable. It's basically an extension of the halting problem - you cannot predict the outcome of a program except by running it. All we can do is run tests and make guesses and assign a percentage, and AI is always going to be more patient and careful than we can afford to be.
youtube · AI Moral Status · 2024-03-16T17:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
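
Each coded comment carries four categorical dimensions plus an id. A minimal Python sketch of that record shape, assuming the allowed values are exactly those visible in the raw response below (the real codebook may define additional categories):

    # Hypothetical schema inferred from the values visible in this output;
    # the actual codebook may allow more categories per dimension.
    from dataclasses import dataclass

    ALLOWED = {
        "responsibility": {"none", "ai_itself", "company", "government", "user"},
        "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
        "policy": {"none", "liability", "regulate", "ban", "industry_self"},
        "emotion": {"resignation", "indifference", "outrage", "fear", "approval"},
    }

    @dataclass
    class CodedComment:
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            # Reject any value the model emitted outside the closed vocabulary.
            for dim, allowed in ALLOWED.items():
                value = getattr(self, dim)
                if value not in allowed:
                    raise ValueError(f"{self.id}: unexpected {dim}={value!r}")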
Raw LLM Response
[{"id":"ytc_UgzfvFuZ76W8WrJ4ldh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_Ugx1YtvmJBGyxa7xN1x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_UgyvO3iXf7sBGG0aLqt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},{"id":"ytc_Ugy1ylKx1NFwIfB0N8l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgwhxMf1nWDbFh17SOV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugzb8V66eQWin6DZxBt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_UgydRodPqlBB2A_yaBN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_Ugx2E-ouNJd783sJGot4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},{"id":"ytc_UgwsTVUkerQBpvCp-Yd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},{"id":"ytc_UgxCzX4k94XMwtMmLfx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}]