Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humans second guess themselves, and invent stuff to fill in the gaps. Are AI hallucinations simply parroting us in how we think? I'm fascinated by the thought of what we can learn about ourselves as a race, from the Artificial Human we are building.
youtube AI Moral Status 2025-10-30T20:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           unclear
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyxUlZ1U3WDzWd6NA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxjC4ynLPj748PRgNJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgydX9KsXkvPOd_CNVt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyJyFcYmflsqfeWYNh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_Ugz1e5I1tkoZ41iRjf14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxOyOUos_8xSp2pq8d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgywYasgpXCF0OXUODR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzyP4FR3gM33-qNFYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwJgI4op1Lq_OxmJm14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzt7NEvame2ldE72X14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
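The raw response is a JSON array with one object per comment, keyed by comment id, carrying the four coded dimensions. A minimal sketch of how such a response could be parsed and one comment's codes looked up (the helper `code_for` is a hypothetical name, not part of any pipeline shown here; the sample data is a one-entry excerpt of the array above):

```python
import json

# Excerpt of a raw LLM coding response: a JSON array of per-comment codes.
raw = '''[
  {"id": "ytc_Ugzt7NEvame2ldE72X14AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "unclear", "emotion": "approval"}
]'''

def code_for(raw_json: str, comment_id: str) -> dict:
    """Return the coding dict for comment_id, or raise KeyError."""
    for entry in json.loads(raw_json):
        if entry["id"] == comment_id:
            return entry
    raise KeyError(comment_id)

code = code_for(raw, "ytc_Ugzt7NEvame2ldE72X14AaABAg")
print(code["responsibility"], code["emotion"])  # → none approval
```

Matching on the stored comment id is what lets each row of the coding-result table above be traced back to a specific object in the raw model output.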