Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is that legend around how the first film was shown to people. It was footage of a train arriving. Nowadays we tell each other that when the train approached the camera and got larger in frame, the audience panicked and clamored to get out of the theatre, believing the image of the train might break out of the frame and drive right into them. It is debatable, even unlikely, that reality played out the way we tell each other. Most likely the people of the time were amused and bewildered, but I doubt they confused the moving images for being anything more than that - images. I wonder if the people of the future will look at us the same way, with us confusing the false construct of LLMs with something that is real. Except they will be right in a lot of cases. What we refer to as AI isn't real. It's not intelligent. It's not a real train driving out of a canvas. But of course it's easier to imagine the people of the past as simple (even stupid) creatures who would panic over something that is common now. I think that's part of what needs to change. We need to learn to understand what's real and what isn't. In a world where we are being culturally trained to get lost in fiction/unreality (how many people put a focus on media being realistic in any given context? How many people are forgetting how actual humans look thanks to filters? How many people are forgetting real words and concepts to service pro-corporate language, and censor themselves?) we're very susceptible to a friendly bullshit machine that eats all our drinkable water. Because reality is culturally optional. We can convince ourselves it's whatever we want it to be.
youtube AI Moral Status 2025-10-31T05:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyUSZEt_D_L-srdtY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwAR-miK3McSNbQPlh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwsUjPct9PdMZ4XAVV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwyUpBv84xu5HK-UJ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwP7jRZYjiOlpH3Ve94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyAxxpxkUA1pNFS3IF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0bCq7miXbvb3zCFR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyTcGoRQ6hE812SaF14AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzoyNSQG_OyUFPjpMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz_mNWRN9AgxSfaC994AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
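The raw response above is a JSON array of per-comment codings, keyed by comment id. A minimal sketch of how such a batch response can be parsed and matched back to a single comment (Python; the two-element `raw` excerpt and variable names here are illustrative, not part of the pipeline):

```python
import json

# Excerpt of a batch coding response: a JSON array with one object per comment.
raw = """[
  {"id":"ytc_UgyUSZEt_D_L-srdtY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz_mNWRN9AgxSfaC994AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

codings = json.loads(raw)

# Index by comment id so the coding for the comment shown above can be looked up.
by_id = {c["id"]: c for c in codings}
coding = by_id["ytc_UgyUSZEt_D_L-srdtY14AaABAg"]
print(coding["responsibility"], coding["emotion"])  # none indifference
```

Indexing by `id` rather than by array position keeps the lookup robust if the model returns the codings in a different order than the comments were sent.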