Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The "what problem is it solving?" part from the CNN article is a good point. Because if it isn't actually solving a problem - what incentive is there for it to be used long-term? Because there are AI tools that actually solve problems! There is AI that one day might help recognize cancer (or other medical problems) early, thus helping with diagnosing and treating patients. There's AI tools that can help researches unravel the structure and function of proteins. AI tools that can help with developing new medicines. THOSE are actually solving problems. THOSE are GOOD, and they will stay. But genAI? What problems are chatGPT or DALL-E and the likes solving? Because the only "problems" I see them "solving" currently are stuff like "paying artists and writers for their work" (probably the biggest "problem" it's "solving", because gosh, companies really just hate us pesky artists wanting to be fairy compensated for our work, don't they?), "having to actually put effort into something you make", "learning a new craft", and "using your own brain". For me personally, there's also this point: why would I spend my energy and what little free time I have consuming something (a piece of writing, a picture, whatever) that zero effort and time were put in? Why would I bother reading a story or take time to look at a picture that nobody could even bother making themselves? I have just zero interest in that.
youtube Viral AI Reaction 2025-04-05T17:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugy5o6rcQy_1Xj2aD0V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyi5etGWWKYveN23Jh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw_8s313fOOnhI3tYZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw15MKhF9GHGUvlOWF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzhmbOkR03mqUBYx-x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"disapproval"}, {"id":"ytc_UgzTK-s9sWe4kue9Elx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwZwWgBO7FcQfvejGl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxAlaopOKXbU2sUYwF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwSvfQZiWa8JFS79H14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw2_g4qpjzyqsTuHdh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"} ]