Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My experience from the last weeks: I used an AI to generate an image of a black poodle for a meme recently. In the end, I had about five rather good options to pick from. And these five were the result of over 50 attempts. Some time before, I asked ChatGPT to create an image of a marble bust based on a photo. I gave clear instructions what I had in mind - the result was worse than that of a second attempt where I didn't give any additional instructions. Last week at work, I asked our tutoring AI to create language exercises for my student. The result made us all burst into laughter, it was flawed as hell. Some time later, someone who works as a police officer told me that he used their system AI to let it write a report of a case - he spent the remaining hours of his shift correcting and rewriting the report anyway. The AI couldn't even stick to a consistent spelling of the last names! Yesterday, I was asked for my opinion on which wallpaper we should present my cousin as a gift. Most of the wallpaper images were obviously AI content, with objects merging into each other and other inconsistencies. And don't get me started on all these AI illustrations (e.g. for history content) that flood YouTube. As much as I enjoy toying around with AI from time to time, the ubiquity of this begins to annoy me...
youtube · Viral AI Reaction · 2025-06-15T14:0… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_Ugxl2qwwaBmELI9O6Nl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx-2TuxBYixDwSbkY54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugzs1DUaWfQsIlRuypx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzLkqOczU5218C8OxR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyxxdiXMmA6VAMg8dB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugy9Pr1FQCC1PAF1hbl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxwnbGDrZ4sqRirdMt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugyg1AbPJfZyT8wl-894AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzeX__THcraUr3uTfZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugw1Rc7UsMwDy-NmNQF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
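The raw response above is a JSON array of coding records, one object per comment, each with an `id` plus the four coded dimensions. A minimal sketch of how such a response could be parsed and sanity-checked is shown below; the allowed code sets in `ALLOWED` are inferred from the values visible on this page, not from the actual codebook, so they are assumptions:

```python
import json

# Assumed record shape, matching the first object in the raw response above.
raw = ('[{"id":"ytc_Ugxl2qwwaBmELI9O6Nl4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"none","emotion":"indifference"}]')

# Allowed codes per dimension — inferred from the values seen in this
# response; the real codebook may define more (or different) codes.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "unclear"},
}

def invalid_ids(records):
    """Return ids of records whose value for any dimension is out of codebook."""
    bad = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                bad.append(rec.get("id"))
                break
    return bad

records = json.loads(raw)
print(invalid_ids(records))  # [] -> every record uses known codes
```

A check like this makes it easy to spot records where the model invented a code outside the instructed label set, which is a common failure mode when coding with LLMs.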