Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm sorry but dr. Neil DeGrasse Tyson stops when comes to medicine but then rolls right on when it comes to AI safety contradicting AI researchers. AI alignment is not solved. AI safety is still struggling. And that's even before you address perverse incentives of capitalistic pressures on that development. And how that exponential growth is really not the right model given limited data while mentioning that about AI generating something new despite that being an actual possibility because AI doesn't only interpolate it can also extrapolate. Will it be good ... not now and likely not for a long time but at some point based on previous behaviour and data it may build coherent enough model of a person to extrapolate quite well.
youtube AI Moral Status 2025-12-27T13:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxgmYNz6dGKIANqelV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxEFBq3icK9DpFgqF94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyhgWWRmytnZkqQ5EJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy-JSf5vA0qn863SnN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxMQWrXmWpTIH4b6yR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz3SZ8T98sdWPaVoOh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzQIY4RUrAGKNKC1vh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx_aJNFggqB0G66m2h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxmoqSGt9Tl3o63zNF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyhpsPoW2aaMaZHlzN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
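A response like the one above can be checked programmatically before the per-comment codes are stored. The sketch below is a minimal validator, assuming the allowed category labels are exactly those that appear in the records shown here (the real codebook may define additional values, and the function name `validate_records` is hypothetical):

```python
import json

# Assumed codebook, inferred only from the labels visible in the raw response
# above; the actual coding scheme may include more categories per dimension.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "resignation",
                "approval", "unclear"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Each record must be an object carrying a comment id plus one
        # recognised label for every coding dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with a single record copied from the response above.
raw = ('[{"id":"ytc_Ugy-JSf5vA0qn863SnN4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
print(validate_records(raw)[0]["emotion"])  # fear
```

Dropping malformed records rather than raising keeps a batch usable when the model occasionally emits an off-schema label; a stricter pipeline might instead flag such records for manual re-coding.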