Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@SeanBoyce-gp literally just google "ai safety research" or "interpretability research" and you'll find more papers than you'll ever read
youtube · AI Moral Status · 2025-10-30T21:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          industry_self
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_Ugxq0yS22fQI8RihVXt4AaABAg.AOv0P78xrTEAOv594lsfxo","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugxq0yS22fQI8RihVXt4AaABAg.AOv0P78xrTEAOv8yFT8nZ2","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugxq0yS22fQI8RihVXt4AaABAg.AOv0P78xrTEAOv9OOvNtcG","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_Ugxq0yS22fQI8RihVXt4AaABAg.AOv0P78xrTEAOvBV-etcju","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugz7To3N3bTqWHRXAWd4AaABAg.AOv06v6ZjMRAOvCdhnFhwJ","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgyoHjqod2B0445XdM14AaABAg.AOv06KswlguAOvBMsFJY_Z","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
  {"id":"ytr_UgxzjRJ_c919F5GSn054AaABAg.AOv019i5SWDAOvBDGMGRyV","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgyxEpNR4j8RqX2xfft4AaABAg.AOv-vpVmC3EAOv8xrvYS4k","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgyxEpNR4j8RqX2xfft4AaABAg.AOv-vpVmC3EAOvA6t3zNbc","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgzczosQYWlhu4gNJCl4AaABAg.AOv-s3FOmrWAOvCONgyiSg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
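A raw response like the one above can be parsed into per-comment codes by loading the JSON array and validating each record against the codebook. The sketch below shows one way to do this; the allowed value sets are inferred only from the values visible in this page (the real codebook likely defines more categories), and the function name `parse_coded_batch` is illustrative, not part of any actual pipeline.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# responses shown on this page; the actual codebook may allow more.
ALLOWED = {
    "responsibility": {"none", "user", "company", "unclear"},
    "reasoning": {"unclear", "consequentialist", "virtue"},
    "policy": {"unclear", "industry_self", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment id, skipping records that are
    missing an id or carry an out-of-codebook value."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        values = {dim: rec.get(dim) for dim in ALLOWED}
        if cid and all(values[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = values
    return coded

# Example with a shortened, hypothetical id:
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear",'
       '"emotion":"indifference"}]')
print(parse_coded_batch(raw))
```

Validating against the codebook at parse time catches the common failure mode where the model emits a value outside the coding scheme; such records are dropped here, but they could equally be flagged for manual re-coding.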