Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've said it before, I'll say it again. Humanity wasn't ready for the internet. There was a time when it took effort to publish things. Books, film, television, no matter what it was it was something that took time and resources, of production and distribution, and as a result, the people who controlled those resources filtered it. They were editors guaranteeing the quality and integrity of the material produced. Now, there is no filtration. You have people farting out low-quality dreck on a blog, and people take it seriously. The obvious stuff is 'fake news' of random people making shit up and posting on a website, but I recently saw someone rant about empty, low-quality, pointless dreck on YouTube directed toward kids. Children's media used to enrich and educate children, now it just numbs them. And now, we have chatbots doing the same thing! Just outright lying with no editors there to check things out! I would've assumed lawyers would've been educated enough to know how to filter this stuff and realize computers could lie just as easily as people, but apparently not! Hell, it was even possible that the ChatGPT bot was citing stuff that it thought was 100% true, but its source was some random blog that was run by a guy who was just flat-out lying, and no one equipped the bot with the tools to know this stuff. This just proves it! As a species, we're too dumb for the internet!
youtube · AI Responsibility · 2023-06-10T19:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwlnBtrwVRTLP2T7bh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwK8szUvfWy08CVvON4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxIDvTw_iiUNxofY654AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzPw6qGfYeArcXbWfZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzcDP_jb0VD0nRPt6R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyotn5LC0DwNiCVcd54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwRgBILMyDoVxxzLnd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzxgE165pMA_zSeLM94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWqznGa1a5iIYtiEt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxebunbbBTDeqm3bkV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
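The raw response is a JSON array with one object per comment, keyed by comment id, carrying the four coded dimensions. A minimal Python sketch of how such a response could be parsed to look up the codes for one comment (the helper name `coding_for` and the inline one-row sample are illustrative, not part of the actual pipeline):

```python
import json

# Illustrative sample in the same shape as the raw LLM response above
raw_response = (
    '[{"id":"ytc_UgwlnBtrwVRTLP2T7bh4AaABAg",'
    '"responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"resignation"}]'
)

def coding_for(raw: str, comment_id: str):
    """Parse a raw LLM response and return the codes for one comment id, or None."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

codes = coding_for(raw_response, "ytc_UgwlnBtrwVRTLP2T7bh4AaABAg")
print(codes["emotion"])  # resignation
```

Matching on the stable comment id rather than array position keeps the lookup correct even if the model returns the objects in a different order than the comments were submitted.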