Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humans aren't in alignment with humans. How can a more complex non-human intelligence be in alignment with an unaligned species? Alignment is an illusory palliative. There is no safe way to create AGI, or even to empower AI to control our critical systems.
youtube AI Moral Status 2026-02-25T01:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgysrjHpVimz086QbCl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyF1dGxOdykk0K6Q1R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx3Jo465wNYBo9mDQt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugy_wYmIZJOOrUNBzWJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxR9YVNhoC45Wy-6GV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxC4BhaIHgFruH6jQp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz7NzAfjucI8wJzGpR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwmmmWN0g636cY6fH94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxOAENRbn-VfIMgwE54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzlHX1nKQhVZ-o-MjV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
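A raw response like the one above can be turned back into per-comment codes with a small amount of parsing. The sketch below is a minimal, assumed-shape example: it treats the raw response as a JSON array of records keyed by `id` and builds a lookup from comment id to its coded dimensions. The function name `parse_codes` is hypothetical, and the two embedded records are copied from the response above.

```python
import json

# Two records copied verbatim from the raw LLM response above,
# used here as stand-in input for the sketch.
RAW = '''[
  {"id":"ytc_UgwmmmWN0g636cY6fH94AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzlHX1nKQhVZ-o-MjV4AaABAg","responsibility":"company",
   "reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

def parse_codes(raw: str) -> dict:
    """Map comment id -> coded dimensions (hypothetical helper).

    Raises ValueError if the model output is not valid JSON, which is
    worth catching explicitly: LLMs sometimes wrap JSON in extra text.
    """
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codes = parse_codes(RAW)
print(codes["ytc_UgwmmmWN0g636cY6fH94AaABAg"]["policy"])  # → ban
```

In this layout the `id` field is stripped from each record and used as the dictionary key, so the remaining keys line up one-to-one with the four coded dimensions shown in the result table.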