Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Actually, coding AIs can be forced to limit hallucinations because, similarly to what the video mentions, truths are entangled. And if the truth (a working program) contradicts the AI, it sometimes admits that it does not know how to continue.
youtube · AI Moral Status · 2025-11-25T22:4…
Coding Result
Dimension      Value
Responsibility developer
Reasoning      consequentialist
Policy         industry_self
Emotion        approval
Coded at       2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyqjsqfuqVEzcWSs2J4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzPQFLqvk0wc_pE3Ed4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx1xsc3KYPErxaMBOJ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz8CmJOUD1ilLkOLYV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxnEZq9A_8AvcwIUZh4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzo9TpISftgSBk6lJh4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy5VOFsmhM_o97_oKJ4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxEfsiPeWqSH7g9Wht4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxCX80X9CDEhqTQ-PN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgzrWfDlRKW3Gxhp7zR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
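When inspecting a raw response like the one above, it helps to parse it and check each record against the coding schema before trusting the per-comment result. Below is a minimal Python sketch; the allowed label sets are assumptions inferred only from the values visible in this export (the full codebook may define additional categories).

```python
import json

# Allowed labels per dimension, inferred from this export only
# (assumption: the real codebook may permit more values).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and verify every record's labels.

    Raises ValueError if a dimension is missing or carries an
    unexpected label; otherwise returns the parsed records.
    """
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
    return records

# Usage with a single hypothetical record (id is illustrative):
raw = '[{"id":"ytc_example","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}]'
records = validate_codes(raw)
print(len(records))  # 1
```

A check like this catches the common failure mode where the model invents a label outside the schema, which would otherwise silently corrupt downstream tallies.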