Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As discussed in the video, sometimes our language just lacks the vocabulary to talk about certain things, and maybe what I'm saying here is a lost cause, but I really think that it might be a good idea to avoid terminology associated with human behavior and psychology when discussing how AI works. Like, instead of saying that an LLM "cares about X", I'd say that X has a big influence on its output.
youtube AI Moral Status 2025-10-31T04:3…
Coding Result
Responsibility: none
Reasoning: deontological
Policy: none
Emotion: approval
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxV8vgwmKcDgMum4w54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwG15S7YkMb3DLuvjF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwI7HSH8iftaBPJmzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugza6nUEuU0Jm_HnM0F4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzyw6_2xAt_gL-E9Mt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTanRaGZXmnFBTU194AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx3yjOnNHM-JUI6YIR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxZm2WJibEPTyCvE1x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugznv2d0fWWUmHT9fs54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzuabJJ9Dxri4gCwjt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
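The raw response is a JSON array of per-comment codings, keyed by comment id, with the four dimensions shown in the coding result above (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and looked up by id, assuming the model returns exactly this array shape (the truncated `raw` string here is an illustrative one-element excerpt, not the full response):

```python
import json

# Illustrative excerpt of the raw model output: a JSON array where each
# element codes one comment on four dimensions. Field names match the
# coding result above; the single-element payload is just an example.
raw = '''[
  {"id": "ytc_Ugzyw6_2xAt_gL-E9Mt4AaABAg",
   "responsibility": "none", "reasoning": "deontological",
   "policy": "none", "emotion": "approval"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

row = codings["ytc_Ugzyw6_2xAt_gL-E9Mt4AaABAg"]
print(row["reasoning"], row["emotion"])  # deontological approval
```

Indexing by id rather than list position matters here because the model is not guaranteed to return codings in the same order the comments were sent.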