Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem I see is that even if we are able to align an AI with human needs and requirements, that is not enough, because we would have to ensure that every AI was fully aligned. And that may turn out to be impossible for geopolitical reasons. I can see the military-industrial complexes of many countries being happy to create AIs that are not aligned with the needs of all humans.
youtube AI Moral Status 2025-10-10T13:3…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxGQUmIpLeFlVIDfUt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgypjQqT3e-Zz3saSR14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzAtvqbmQte7oFZz7Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyveMc7u-7ne9DBptZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw_Kf8CwqOhrrLwZnh4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxfruevPX7EW-ohjCt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxVUIvqOJ-FpeC_iV54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwr6eTh2pBwRoE1BxJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyZhBD0hBQfAb5GL4p4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgynphV8hjpJa61EQld4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
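The raw response is a JSON array, one object per comment, keyed by comment id. A minimal sketch of how such output could be parsed and matched back to individual comments; the `codes_by_id` helper name is hypothetical and the `raw` string below is truncated to two of the entries above for brevity:

```python
import json

# Truncated example of the model's raw output: a JSON array of per-comment codes.
raw = '''[
  {"id": "ytc_Ugw_Kf8CwqOhrrLwZnh4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxGQUmIpLeFlVIDfUt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

def codes_by_id(raw_response: str) -> dict:
    """Parse the raw LLM response and index the coding rows by comment id."""
    rows = json.loads(raw_response)
    # Keep every dimension except the id itself as the coded values.
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

codes = codes_by_id(raw)
print(codes["ytc_Ugw_Kf8CwqOhrrLwZnh4AaABAg"]["policy"])  # regulate
```

Looking up the id of the comment shown above recovers the same four dimensions reported in the coding-result table (responsibility, reasoning, policy, emotion).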