Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Something I'm really curious about is the apparent lack of training materials that remain to be trained on. I've been seeing stuff that implies generative AI is capping out due to kinda having finished with all the training data available. If anybody has more info on this, I'm all ears! Not to say that we aren't already fk'd but I was surprised to hear about this new bottleneck, especially so soon!
YouTube · AI Harm Incident · 2024-07-28T21:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw2XvO6rMmv4w-H7qJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxW7e2gtGlcfRnWpY14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzCA3gqSs9UIm33uTh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxR7a1NzREWTM1Eb-l4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy1-NPGQfcgiUIuMMB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzT0g0qhlYU9a31w0B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxiDCnIUs2E4PyKfdt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxNBJHPZtpmjiSTOWF4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugyf1WTXevcHn37mWn54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyxerqMpB83i4KbP0d4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]