Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Trying to align an AI is like getting a person to do what you want to do. You can guide it in your desired direction and it will likely behave like that, but there is still the possibility of it turning hostile against you.
youtube AI Governance 2025-09-05T20:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwBIR6EW9psozdjCW54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwKdVHdDzkOjhM4dQB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxyDpPa4GspEfMmb4t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzEWUErSfVV_yqY49B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw3ANJEu0iF-ExGr4R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgyC2CCdtBxYuZtsBQl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz4N6V_xgb4KxzGQGd4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"fear"}, {"id":"ytc_UgyiuDkw2ONlWQNdxPB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgxCG4cyKESr7496gEh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwnzxXqfWQVyFaWjup4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"fear"} ]