Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's idiotic to imply intent from an LLM. The training data comes from human to human interactions where emotional content can be extracted from every sentence, including self-preservation. Of course the LLM will simulate that too.
YouTube · AI Moral Status · 2026-03-02T04:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwksWU7Yt_8Sg2YXah4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxOAD4qiJukiw70jSR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxX8IB49EqdwRALIC94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgymXjgx53-rSyODUp54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy7D_kMmjKSiTeuSvV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgwTnYg_Dok9I7AxJ4h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzRl86tEGIR3MsiIZF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxZLerip2EVmiClYTF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgwNsb_ZsKUnPGo71wl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyG99URthmp6B5WlbJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
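A minimal sketch of how a raw response like the one above can be parsed and looked up by comment id. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the variable names and the single-element sample string are illustrative, not part of the pipeline.

```python
import json

# A raw model response is a JSON array of per-comment codes.
# This sample reuses one entry from the response shown above.
raw = ('[{"id":"ytc_UgwTnYg_Dok9I7AxJ4h4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"none","emotion":"indifference"}]')

codes = json.loads(raw)

# Index the codes by comment id for O(1) lookup when joining
# back onto the comment table.
by_id = {code["id"]: code for code in codes}

entry = by_id["ytc_UgwTnYg_Dok9I7AxJ4h4AaABAg"]
print(entry["responsibility"])  # developer
print(entry["emotion"])        # indifference
```

Indexing by `id` makes it straightforward to match each coded dimension back to the original comment, which is how the Coding Result table above pairs the comment with its assigned values.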