Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The AI doesn't actually prefer something over the other since it doesn't have such motivations It only learns from us an AI doesn't think it's behavior is only dependent off of what it learns and this is a lot about the people we are since it would find examples of these and then learn from it not knowing if it's good or bad because an AI like these have no concept because they cannot think in the first place!
YouTube · AI Bias · 2022-12-19T17:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgysLwPQSpFqZKTm77h4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx_kXmkaehDgOrBa5J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzhEwkrkJ16JioHJyZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyoEbzYLggc9yMcvcx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwYzeeXocSySZGQ96x4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwk0FfPEoolcWET7EV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzUeSUL2K-080qwDx14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzLrWtzcu_-FLUwsG94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwxOlOZTewYO7MJKD54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwd6GJ1rYTRSBMIiI54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]
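To inspect the coding for a single comment, the raw response can be parsed as a JSON array and filtered by comment id. The sketch below is a minimal, hypothetical example: it assumes the raw response is available as a string and truncates it to two of the records shown above; the field names and ids are taken directly from the output.

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# (Truncated to two records here for brevity; fields match the output above.)
raw_response = '''[
  {"id": "ytc_UgysLwPQSpFqZKTm77h4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx_kXmkaehDgOrBa5J4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

def coding_for(raw: str, comment_id: str):
    """Return the coding record for one comment id, or None if absent."""
    for record in json.loads(raw):
        if record["id"] == comment_id:
            return record
    return None

coding = coding_for(raw_response, "ytc_UgysLwPQSpFqZKTm77h4AaABAg")
print(coding["responsibility"])  # distributed
```

Looking up by id rather than by list position keeps the inspection robust if the model returns records in a different order than the comments were submitted.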