Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Meanwhile, AI knows exactly what it is doing. It's giving us slop that we humans have to take time to correct. It's giving us code that works, but no one knows how ... that isn't suspect, right? Isn't all of this called a red herring?
youtube AI Jobs 2026-03-07T04:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgxnypCJyhJp7h22cKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyVoiW_PSj5ks79daR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugx6cHvLfmtASvGMHaZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyLQcsG5GdZaBHBudx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwnOPgjDV6aTOF-luB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
 {"id":"ytc_UgzyYqr7zcFl5lETrq94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwcwvYgXrnF-0C3f3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugxg8K9tDw-CzeLuE2l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
 {"id":"ytc_UgwYNeMmoljc26INoC14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxPi6sgKt3LgQ23E554AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"]}
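Note that the raw response above ends in `"approval"]}` rather than `"approval"}]`, so the payload is not valid JSON. A minimal sketch of why every dimension coded as `unclear` (the fallback-on-parse-failure behavior is an assumption, not confirmed app logic):

```python
import json

# Short stand-ins for the raw response's tail: the broken version
# reproduces the "]}" / "}]" delimiter swap seen above (the "ytc_x" id
# is hypothetical, used only to keep the example compact).
RAW_TAIL_BROKEN = '[{"id":"ytc_x","emotion":"approval"]}'
RAW_TAIL_FIXED = '[{"id":"ytc_x","emotion":"approval"}]'

# Assumed fallback record when the model output cannot be parsed.
UNCLEAR = {
    "responsibility": "unclear",
    "reasoning": "unclear",
    "policy": "unclear",
    "emotion": "unclear",
}

def code_comments(raw: str) -> list[dict]:
    """Parse the model output; fall back to 'unclear' if parsing fails."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return [dict(UNCLEAR)]

# The broken tail triggers the fallback; the fixed tail parses normally.
print(code_comments(RAW_TAIL_BROKEN))
print(code_comments(RAW_TAIL_FIXED))
```

Under that assumption, one stray delimiter is enough to blank out the coding for the whole batch, which matches the all-`unclear` result above.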