Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Even "state of the art" AIs like GPT and the rest can probably be simulated like this with fairly good accuracy. Beyond what a non-trained human can do, anyway. Because it's about human behavior, which more or less follows the Collatz conjecture, not the AI. As in, when you analyze AI data, you find it follows Collatz cycles.
youtube 2025-10-09T16:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyUaoyh8czSF89hLhx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzu1gpdR_-npH2vYLZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzcHY8BcjR7jFL3Vv94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxa_AI09CQopJ5muLV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyg8yzfa5uWfoXsN8h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyJQrVjX1jOo36RdNN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx1_MYtaUsboItfnzN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzNOMhiaDpdEAcNe7B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy5inbXk_l14tlwLOt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzy2BdluU0LpgdUiWB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
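A raw response like the one above can be turned into per-comment coding records by parsing the JSON array and discarding any entry that lacks the expected fields. The sketch below is a minimal, hedged example: the field names are taken from the response shown here, but the function name and the validation policy (drop malformed records rather than fail) are assumptions, not the tool's actual implementation. The inline sample is truncated to two records for brevity.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw = """[
  {"id":"ytc_UgyUaoyh8czSF89hLhx4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyg8yzfa5uWfoXsN8h4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

# Fields observed in every record of the response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding(raw_json: str) -> list[dict]:
    """Parse a raw LLM coding response into records.

    Keeps only dict entries that carry all required fields; a malformed
    or partial record is silently dropped (an assumed policy, not the
    tool's documented behavior).
    """
    entries = json.loads(raw_json)
    return [e for e in entries
            if isinstance(e, dict) and REQUIRED_KEYS <= e.keys()]

records = parse_coding(raw)
for rec in records:
    print(rec["id"], rec["responsibility"], rec["emotion"])
```

Each surviving record maps one comment id to its four coded dimensions, matching the Dimension/Value table rendered above for the single displayed comment.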