Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The believer ai seems like it doesn’t know what its talking about considering it is consistently wrong and blind to the truth, usually falsely comparing something or not directly answering questions. The first like ten minutes are cat and mouse where the believer just seems wrong and if it were actually answering then we wouldn’t be talking about the part of the conversation for eleven minutes but with slight modifications.
youtube 2025-11-21T06:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugwaxc4NcbCVcP0ItfR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzBsv1typhBSwAt7Bl4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwrTe6rrAFvbKM6NiJ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzhp9_bxv2VbcnnORN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxNW4BAedgs-A5p65p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyqneIFk9uWLOkZon54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgytTT4UF2Ys59nNmYB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxfxJRk502mClUDrzN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgynbghcxE28ikOYl3l4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyFH5vQZerkZVHNqBF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
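A response like the one above can be checked before it is stored: each record should carry the four coding dimensions, and each dimension should hold a value from the codebook. The sketch below is a minimal validator; the allowed-value sets are inferred only from the values visible in this response, so the real codebook may contain additional categories.

```python
import json

# Allowed values per dimension, inferred from the response shown above.
# Assumption: the actual codebook may define more categories than these.
VOCAB = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed", "virtue"},
    "policy": {"none"},
    "emotion": {"mixed", "resignation", "outrage", "indifference", "approval"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and check every coded comment.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the (assumed) codebook vocabulary.
    """
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in VOCAB.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# One record from the response above, used as a smoke test.
raw = ('[{"id":"ytc_Ugwaxc4NcbCVcP0ItfR4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"mixed"}]')
print(len(validate_codings(raw)))  # prints 1
```

A failed check pinpoints the offending comment id and dimension, which makes it easy to re-prompt the model for just the records that did not conform.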