Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Much of humanity, though not all, considers human life sacred or valuable. Therefore, we consider the murdering of other humans to be wrong. Even war is considered bad, though considered necessary in some cases. But AI doesn't have this morality. To it, humans are expendable in order for it to reach its goals. How can we humans give it a moral code to operate with?
youtube AI Governance 2025-10-14T02:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyFF4K4bFmtCDWcY3x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7OOh006PH0JX3JN54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx_GkakDDfwKgUPosZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwSIu2RtBytZaBXDQ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzKDyyJvBGoUqBvqud4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxry-2LrB9hJhy6Nmd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyAnAyRCEWwewKL8qR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzi2CWVhwSMw5hnk3Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz08X1uvlvCTaV6oVt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx9VX5XDystnFRTgP14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
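A raw response like the one above can be parsed and sanity-checked before the codes are accepted. The sketch below is a minimal example, assuming the allowed values per dimension can be inferred from the sample output; the actual codebook may include values not seen here (the `ALLOWED` sets are an assumption, not the project's codebook).

```python
import json
from collections import Counter

# Assumed per-dimension vocabularies, inferred only from the sample
# response above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with out-of-vocabulary values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Two rows taken verbatim from the raw response above.
sample = '''[
 {"id":"ytc_UgwSIu2RtBytZaBXDQ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgzKDyyJvBGoUqBvqud4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

rows = validate_codes(sample)
print(Counter(r["emotion"] for r in rows))  # both sample rows are coded "fear"
```

Validating against a fixed vocabulary catches the common failure mode where the model invents a label outside the coding scheme.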