Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The feedback loop for misinformation you describe is worrying, but perhaps this can be addressed by the programming and training algorithms. It should be possible to integrate proper fact checking in LLMs. Keep in mind that these are very early days for AI. As for the slop, this could be said to be due to bad prompting, i.e. it is ultimately a human problem, not a tech problem.
youtube 2026-01-24T17:4…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzeridWgn0USXt9EnJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw8vzYQnZLb-hv21UJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzG_CniqQDwt2ff_VB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgywIrNaL_jrzXUZnjh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKjug4Y0nqZluV4e94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyBg4klz0iKo75ewml4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzB85FS_mAe2Nl1DXJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyV7fllnqm9AholJaJ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxDmzYGVe9MpYHjwMd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyo9WJ6cwGtKaeBJE54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
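Because the model returns one JSON array covering the whole batch, recovering the coding for a single comment means parsing the array and filtering by `id`. A minimal sketch of that lookup is below; the `lookup` helper is hypothetical (not part of any tool shown here), and the `raw` string is abridged to two entries from the response above.

```python
import json

# Abridged raw batch response; the real output contains one object per comment.
raw = """[
  {"id": "ytc_UgzB85FS_mAe2Nl1DXJ4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzeridWgn0USXt9EnJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

def lookup(raw_response: str, comment_id: str):
    """Parse the batch JSON and return the coding dict for one comment id."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None  # id not present in this batch

coding = lookup(raw, "ytc_UgzB85FS_mAe2Nl1DXJ4AaABAg")
print(coding["emotion"])  # approval
```

The returned dict carries exactly the four coded dimensions plus the id, which is how the "Coding Result" table for a comment is populated from the raw response.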