Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It makes you/me feel how much we fail, but honestly, I don't believe that you or I are able to judge or measure whether we are actually failing on a full scale. Doubt your own criticism. Doubt your own optimism. Mix it all together – and you're somewhere closer to the whole truth. What I mean by this cryptic chaos-writing is: it's developing – into bad directions (obviously) and into good directions (obviously). But AI didn't even need to happen for us to realize this; that was clear before. So it's not that interesting to me to realize "oh we fail" or "oh we succeed". Me, as an individual who is not working in AI studies or similar fields, I'm just here to analyze specific examples where AI is "bad" or "good", "useful" or "counterproductive" – whatever you want to call it ("fail" / "succeed"). I try to picture a whole AI critique in my brain, what I think about it and so on, and the more and longer I do that, the more I realize that it isn't possible (yet), and doesn't make any sense, to make a judgment "about" AI. It makes much more sense – again – to target specific examples where AI is being used in a "good/successful/..." or "bad/failing/..." way. That is where we, as individuals, can have an impact. By constantly praising or generally bashing AI we just sabotage the whole process of human progress. That's my most general POV on the whole AI topic.
youtube AI Responsibility 2026-03-05T13:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgwUUU0FJr1qb7YP1-l4AaABAg.AR2_VjTelVKARB0v3DUPdn","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzG7dbDUeGOQZdHAJV4AaABAg.AQppNN1Uj-BATym_qs_cHa","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgxEI3fKyCOLXnd-3a14AaABAg.AQ2GcB6PKP9AQ2HdgSDy6r","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwGMvDU00_X8Tfk2794AaABAg.AOrqT_6G3_JAQyX4AIA5aF","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzLD8dm2UO2ax5PMUp4AaABAg.ALFzRlAhKL6AOBv9u3DbGX","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgzLD8dm2UO2ax5PMUp4AaABAg.ALFzRlAhKL6AOCSQoU3yC8","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugyu9-hAWphi7g35oUR4AaABAg.AKFoAwqTAFQATa7uPmgfVy","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytr_UgwxULmQdzOA0lqwB9B4AaABAg.AIth_F0MizHALYmIcI-uiS","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgyxYCyu1kR3N4-Hip94AaABAg.AIli_xiOogkAJ6kwRXkQ2B","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugxx1QxRAsLE9FI4mkt4AaABAg.AI0HEwE5S0xAIRPWpURcFA","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
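The raw response above is a JSON array with one record per comment, each carrying an `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal Python sketch of how such output could be parsed and validated before use — the `parse_codes` helper and the abridged two-record sample are illustrative assumptions, not part of the actual pipeline:

```python
import json
from collections import Counter

# Abridged sample of the raw LLM response: a JSON array of per-comment codes.
# (Hypothetical two-record excerpt for illustration.)
raw = '''
[
  {"id": "ytr_UgwUUU0FJr1qb7YP1-l4AaABAg.AR2_VjTelVKARB0v3DUPdn",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgzG7dbDUeGOQZdHAJV4AaABAg.AQppNN1Uj-BATym_qs_cHa",
   "responsibility": "distributed", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"}
]
'''

# The four coding dimensions every record is expected to contain.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text):
    """Parse the LLM output and reject records missing an id or a dimension."""
    records = json.loads(text)
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record: {rec!r} (missing {missing})")
    return records

codes = parse_codes(raw)
# Tally one dimension across all parsed records, e.g. responsibility.
tally = Counter(rec["responsibility"] for rec in codes)
print(tally)  # Counter({'none': 1, 'distributed': 1})
```

Validating each record before tallying matters here because LLM output is not guaranteed to be well-formed JSON or to include every dimension; failing loudly on a malformed record is safer than silently skipping it.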