Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
​@tullochgorum6323 Yeah that's the fundamental issue with AI. The only good ones are the ones that were specifically designed for a particular purpose in what could be considered hard fields. Like medicine or law. However, for the more softer fields you start running into problems. And AIs designed to do everything will be good at nothing without robbing from human creators. And given just about everyone has been robbed by AI, all AI can do is rob from itself. Which may lead to more problems depending on what that AI is used for. I mean given that Grok had an infamous moment recently before Musk decided to goonerfy it, which paradoxically went against his previous worries about population decline, I'm getting the feeling that all the people praising AI for being reliable will end up eating their own words when AI inevitably ends up suffering from enshittification.
youtube AI Responsibility 2025-10-10T17:2… ♥ 5
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgyojNJPB3giiydQBTJ4AaABAg.ANnVaZj6_BOANpK8NeK2cq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwVNneL9Vt0yjDtiFh4AaABAg.ANn51nyqgr8ANqKI2Nu0dx","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgwVNneL9Vt0yjDtiFh4AaABAg.ANn51nyqgr8ANtMYTmeTHp","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwVNneL9Vt0yjDtiFh4AaABAg.ANn51nyqgr8ANzi4btHsye","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgwVNneL9Vt0yjDtiFh4AaABAg.ANn51nyqgr8AO7AWnf91cJ","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_Ugwgg8KvEN5yUMki9_p4AaABAg.ANmkntdR1KZAO1Qplzpk3W","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_Ugwgg8KvEN5yUMki9_p4AaABAg.ANmkntdR1KZAO1RIurk1iN","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugwgg8KvEN5yUMki9_p4AaABAg.ANmkntdR1KZAO6HVgqVNLr","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgyRVGdr_n8bFiUShg94AaABAg.ANm8cb88pmVANod7Kv4UWY","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_Ugx9LtsxGjLOafbYKZ94AaABAg.ANm3pa7pW6LANon-Q7XYMJ","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
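A raw response like the one above can be checked programmatically. The following is a minimal sketch, not the project's actual parser: it loads the JSON array, keeps only the four coding dimensions, and defaults any missing key to "unclear". The comment IDs in the sample (`ytr_abc`, `ytr_def`) are hypothetical placeholders, as is the `DIMENSIONS` tuple name.

```python
import json

# Hypothetical sample mirroring the raw response format above;
# the real IDs are long YouTube-reply identifiers.
raw = '''[
  {"id": "ytr_abc", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_def", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear"}
]'''

# The four coding dimensions shown in the result table (assumed schema).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

coded = {}
for rec in json.loads(raw):
    # Keep only expected dimensions; fall back to "unclear" if absent,
    # e.g. the second record omits "emotion".
    coded[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}

print(coded["ytr_abc"]["responsibility"])  # developer
print(coded["ytr_def"]["emotion"])         # unclear
```

Indexing by ID this way makes it easy to cross-check the per-comment coding result (e.g. responsibility = developer, emotion = mixed) against the exact model output.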