Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
An interesting variation of this would be to allow agents to debate one another. I tried a similar experiment, only between GPT and Gemini, and their reasoning often diverged along lines of agency vs. utility, even when they converged on the conclusion.
youtube 2026-01-31T11:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugymkklu4wF_nlRlcb54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwnwm3zX7yWjomTrNJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzmNsnyk8bqoPWJhe94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzSngscPVT_P0lkJVN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxXwgnc-9JjUycydFx4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzkcDQAlETAkOSQEr14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyrXlxS2kRNVlD-RPR4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw89obCREZGNGYv21t4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxZuOaEHS3qk8F-tVl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyOq_d5A1u2H7F4bip4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
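The coding table above is derived from one record in this raw JSON. A minimal sketch in Python of how such a response could be parsed and checked before use; the `parse_coding` helper and the `ALLOWED` category sets are assumptions inferred only from the values that appear in this dump, not the project's full codebook:

```python
import json

# Allowed values per coding dimension, inferred from the codes visible
# in this dump; the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"fear", "outrage", "mixed", "approval", "indifference"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip anything that is not a dict with an id and valid codes.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id": "ytc_UgyrXlxS2kRNVlD-RPR4AaABAg", "responsibility": "unclear",'
       ' "reasoning": "mixed", "policy": "none", "emotion": "approval"}]')
codes = parse_coding(raw)
print(codes[0]["emotion"])  # approval
```

Validating against a closed set like this catches the common failure mode where the model invents a label outside the codebook; such records are dropped rather than silently stored.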