Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- So the coders who are coding code, code code traditionally but the coders coding… (ytc_UgxcPp0-D…)
- Whiney annoying voice. If you make good art, AI won’t replace you. There’s room … (ytc_UgzTcbHKu…)
- During the industrial revolution we were concerned that the machines we built wo… (ytc_Ugy5KzzTm…)
- Good AI users use more complex workflows like Krita diffusion. Sure, theyre not … (ytc_Ugx9-10hH…)
- Either humanity will be eliminated by AI as soon as it is able to sustain and d… (ytc_Ugw9TBokq…)
- Are you political types every going to realize that ALL CORPORATIONS DO WHATEVER… (ytc_UgxksrJoB…)
- Deepfakes are the most pointless use of technology. Technology is suppose to mak… (ytc_UgzBL5EZ8…)
- 🔥 TECHBRO NPC MODE ACTIVATED! 🤖💪 Here’s 5 talking points to DEFEND gen AI as ART… (ytr_UgzYtFiLo…)
Comment
On a serious note, these bots should not be allowed to give people advice in many cases. I recently found out that my father was asking chatGPT for basically medical advice, believing that it was accurate because he heard AI is making advances in medicine (something other kinds of AI have done, not generative AI). I've also overheard people in hospital waiting rooms asking these bots for medical advice. And sure they put a small print somewhere in the UI telling people that AI makes mistakes and to verify the output they get, and people should know better anyway, but sadly a lot of them don't. AI companies surely know this, but they already made these slop machines with little regard for ethics so it's not surprising they don't put more effort in preventing harm to society.
Source: youtube · Posted: 2025-10-24T10:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyqAInQy3tFIEU76EJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx3ifvOAbxaQA5tnRR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzZaBe5NAFH9JcUPE14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz47ZmbLzgbNP4KyN54AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugww8qF7mCCD0p45LBt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzIY_Z2wsdGdpxOzdV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyptdHa7D9Aid8VKWp4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwHVTwlWMjJ6rszXgZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwGEdSQ9TotZ6m4PFF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxxnT005gSyeTMTFv94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
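Looking up a comment's coding in a raw response like the one above can be sketched as follows. This is an illustrative helper, not part of the tool itself; the JSON excerpt is copied verbatim from two records of the response above, and the set of expected field names is inferred from the sample records.

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw_response = """[
  {"id":"ytc_Ugww8qF7mCCD0p45LBt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzIY_Z2wsdGdpxOzdV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

# Field names inferred from the sample records (an assumption, not a schema
# published by the tool).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def lookup(coded, comment_id):
    """Return the coding dict for one comment ID, or None if it is absent."""
    return next((c for c in coded if c["id"] == comment_id), None)

coded = json.loads(raw_response)

# Check that every record carries exactly the expected dimensions.
for record in coded:
    assert set(record) == EXPECTED_KEYS, record

hit = lookup(coded, "ytc_Ugww8qF7mCCD0p45LBt4AaABAg")
print(hit["policy"])  # regulate
```

This matches the coding result shown above for the featured comment (responsibility: company, reasoning: consequentialist, policy: regulate, emotion: fear).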