Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let me point out that there is no intelligence in “AI”. It just reorganizes whatever it was trained on into whatever it assesses to be the most statistically probable outcome. It doesn’t think. It doesn’t strategize, it doesn’t think, it doesn’t reason. It just calculates probabilities and tries to give a vaguely coherent answer.
youtube 2026-03-29T12:1…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | unclear
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxNSe8M5vOnCxZje6d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy-BGsWBAq_1mYhv7F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwDhs2QDvg7Dw3Guh54AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgywveIekNJhP1A4flJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzkj_O888dYEzbX_ax4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyBZ3xA_C6qq3HtiQl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxjzA5ptZyphfCXDld4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxlj1DqyqRwcAP-IW54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzsXliZ-pjgXkfbfeB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxAvdjTupjaWAVDU1p4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
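The raw response above is a JSON array of per-comment coding records, one object per comment id with the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be inspected in Python — the JSON is copied verbatim from the raw response above; the variable names and the tallying approach are illustrative, not part of the pipeline itself:

```python
import json
from collections import Counter

# Raw LLM response, copied verbatim from the record above.
raw = """[
 {"id":"ytc_UgxNSe8M5vOnCxZje6d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugy-BGsWBAq_1mYhv7F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwDhs2QDvg7Dw3Guh54AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgywveIekNJhP1A4flJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugzkj_O888dYEzbX_ax4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgyBZ3xA_C6qq3HtiQl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxjzA5ptZyphfCXDld4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugxlj1DqyqRwcAP-IW54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzsXliZ-pjgXkfbfeB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxAvdjTupjaWAVDU1p4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]"""

records = json.loads(raw)

# Tally each coding dimension across the batch.
emotion_counts = Counter(r["emotion"] for r in records)
responsibility_counts = Counter(r["responsibility"] for r in records)

# Find the record(s) whose dimensions match the Coding Result shown above
# (responsibility=none, reasoning=unclear, policy=none, emotion=indifference).
matches = [
    r for r in records
    if (r["responsibility"], r["reasoning"], r["policy"], r["emotion"])
       == ("none", "unclear", "none", "indifference")
]

print(emotion_counts)
print(responsibility_counts)
print([r["id"] for r in matches])
```

For this batch, the tally comes out as fear 5, outrage 3, indifference 2, and exactly one record matches the displayed Coding Result (`ytc_UgzsXliZ-pjgXkfbfeB4AaABAg`), which is how a flattened table row can be traced back to the exact model output.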