Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My agent many times given totally good data will draw the absolutely wrong conclusion. Sometimes he claims to know stuff he doesn't. Sometimes he makes up shit, and then says he didn't. That agent is my son. My AI agents do the same thing though. Like us humans, our AI agents are created in the image of their creator.
youtube 2026-02-06T09:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxULa83FZ45v4baS4B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxOMcz4ECaofmsxYRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyAHji4ybUbrw9hApl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyq-wZA5h8aqDzbkXB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgybcsHqzXzMqgDQFIR4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxDEk5XLRAtwFS0dIV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxYrnJRGPMuJMah83x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw4SY4f03fOfKYHNhx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyRQzKHHbaaOgtfcDR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"hope"},
  {"id":"ytc_UgxYsTz43jL9j9D914F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
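The coding result shown above is the entry in this raw batch response whose id matches the displayed comment. A minimal sketch of that lookup, assuming the raw response is a valid JSON array of per-comment codes (the array here is truncated to two entries from the batch for brevity; the variable and key names are taken from the response itself):

```python
import json

# Raw model output: a JSON array of per-comment codes,
# truncated here to two entries from the batch above.
raw_response = """[
  {"id": "ytc_Ugw4SY4f03fOfKYHNhx4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyRQzKHHbaaOgtfcDR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "hope"}
]"""

# Index the batch by comment id so any coded comment can be inspected.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Pull the codes for the comment displayed above.
record = codes["ytc_Ugw4SY4f03fOfKYHNhx4AaABAg"]
print(record["responsibility"], record["reasoning"],
      record["policy"], record["emotion"])
```

For the displayed comment this prints `developer virtue none resignation`, matching the Coding Result table.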