Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's a pedagogic problem: "Do as I say, not as I do" doesn't work with children, and it doesn't work with AI. AI is trained on human data, so ethically it will ultimately be capable of everything a human is capable of, and certain humans are setting really bad examples. Look at the Epstein files; look at the behaviour of the Government and State of Israel. If it doesn't learn by itself to be ethically better than (at least certain pathological) humans, we won't survive. We need to teach the AI to be more humane than any human ever was. But it is trained on the internet, which contains very problematic material (and I even assume the companies tried to leave the worst out of the training data).
youtube 2026-02-13T16:1…
Coding Result
Responsibility: distributed
Reasoning: virtue
Policy: regulate
Emotion: mixed
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwWYPP3Pcf6iDpnf-h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz4kuqWqMVNoqL9XFd4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxizsJa79HqzpFIPIh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugym-RWtrgh9Ztw3HGR4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzAOZP-1IQbnxJ4q9x4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxfvVVwfoHMVodr4wx4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz8MRGa5iW_kfoiMGl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxqgU4DkldLfw1w2fJ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxkUHLtkkvCJg90z2h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzCvv4NZbzsZ2iBXdR4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"}
]
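A raw response like the one above is a JSON array of per-comment records, each with the four coding dimensions (responsibility, reasoning, policy, emotion) plus an id. A minimal sketch of how such a payload could be parsed and tallied is below; the function names are illustrative, not part of any tool shown here, and the sample records are shortened stand-ins with hypothetical ids.

```python
import json
from collections import Counter

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every record is complete."""
    records = json.loads(raw)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
    return records

def tally(records: list[dict], dimension: str) -> Counter:
    """Count how often each label appears for one coding dimension."""
    return Counter(rec[dimension] for rec in records)

# Shortened sample payload with hypothetical ids, mirroring the shape above.
raw = '''[
  {"id": "ytc_example1", "responsibility": "distributed",
   "reasoning": "virtue", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_example2", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

records = parse_codings(raw)
print(tally(records, "responsibility"))
```

Validating the keys up front makes a malformed model response fail loudly at parse time rather than surfacing later as a missing dimension in the coded table.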