Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem is, if you truly want AI to understand things, a few large problems arise. One of them is the nature of learning itself: what is learning, and what does it mean to actually learn something? It might seem counterintuitive, but our brains are a lot like prediction machines too, except we can learn and use the gathered information to predict whether we are right or wrong. If we start building true learning into an AI, we don't really know what would happen. Nobody can really predict what an AI that is actually smart, and not just a tool, might do. I call that real AI; the current AI is more or less a machine, or a tool. It does not have a brain, nor does it process thoughts in a way similar to our own. And if you keep pushing this, you will eventually build an AI that can probably think, but a thinking AI might even be dangerous. We currently don't even understand the nature of consciousness itself.
YouTube · AI Responsibility · 2025-09-30T17:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzeyCz0nbolpXO77Td4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz7j1Hs_3R6h0EdJRd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgwcUkKdabbZImnqxqB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwngrD_WgTjpVtYWmd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyakf3WnNNRCorRkUN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxr3U04ZTXAasnQsbd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyEuzbRgMOcIIUUgD94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx6wmDcFOqHM8JSTxJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzRkUqdFD-v9ouQA6B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzSbX8FJvnbN78Oo814AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
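To recover an individual comment's coding from a batch response like the one above, the JSON array can be parsed and indexed by comment id. The sketch below is a minimal, hedged example assuming the response structure shown (an array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields); the two-entry excerpt in `raw_response` and the helper name `parse_codes` are illustrative, not part of the original pipeline.

```python
import json

# Hypothetical two-entry excerpt of a raw LLM response, following the
# structure of the batch shown above (not the full ten-entry batch).
raw_response = """[
  {"id": "ytc_UgzeyCz0nbolpXO77Td4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwcUkKdabbZImnqxqB4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"}
]"""

# Fields every coding entry is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse the model output and index codings by comment id,
    silently dropping any entry missing an expected field."""
    entries = json.loads(text)
    return {e["id"]: e for e in entries if EXPECTED_KEYS <= e.keys()}

codes = parse_codes(raw_response)
# Look up the coding for one comment by its id.
print(codes["ytc_UgwcUkKdabbZImnqxqB4AaABAg"]["responsibility"])  # developer
```

Indexing by id also makes it easy to spot comments the model skipped or coded twice: compare the dictionary's keys against the batch of comment ids that was sent.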