Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by their comment ID. Random samples:
- `rdc_lgu3wvb` — "The reason the voice actors are protesting is because they do not want to be rep…"
- `ytc_UgyJECAjA…` — "Really good conversation here, Bostrom and Greene. I noticed that there was a co…"
- `ytc_Ugxebujha…` — "AI is total b.s. It was programmed by billionaire psychopaths who have no regard…"
- `ytr_Ugy6OZ3mm…` — "@feelingbalmy Go on copilot, give it a problem and watch it give you a list of vi…"
- `ytc_Ugwlt_x0H…` — "4:44 C-Suites are about to discover the real cost of AI written tech-debt, and t…"
- `ytc_UgzbRLrAJ…` — "After This Debate, The Believer AI Became An Atheist And Then Was Ostracized By …"
- `ytc_UgxpIqYms…` — "whatever, i still using my cursor for 80% productivity. and it safe, ai coding a…"
- `ytc_UgyIMob2D…` — "Mr Musk i.don't know what is AI but all words will be destroyed .Do you destroi…"
Comment
I think its extremely important for people to have an awareness of how AI operates. What the difference is between LLM's and AI Agents and how dangerous a 'sophisticated' tool like AI can be IN THE WRONG HANDS! It can be manipulated to cause harm. AI has no intentions itself, it works using patterns based on numerical weights based on its learned tokens. Without ethical controls, guardrails and regulation its a tool that can be manipulated to cause harm....but AI has no awareness of what harm is! When you have a bit more understanding of how it works you can easily see its the people behind AI that is the danger. I agree with her, corporations should not be allowed to control AI systems, as it is the humans developing AI, those who control it that is the danger behind the AI industry. They keep pushing fear, fear, fear of AI! AI, used ethically, with controls and regulation!, AI is a good creative and informative tool.
Source: youtube · Posted: 2026-04-13T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxWYesLt1uDeAf3i0N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz_fowgep4JS6OI_9R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy8bS-I6_zHVM2lmV94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxN5LENCiTkTgZQWtp4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw3xIhGAFsOspIurPp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwQ3QoQjTNiwmuvs2J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugypc9pALiLvIbreMzZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzAh0rpjbXBMYhzpfJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx47uvN9FG9Lb6jCo54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzYqYsZomhPsjm-5BB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
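The raw response above is a plain JSON array, so working with it programmatically is straightforward. Below is a minimal sketch of loading such a batch and looking up a coded comment by ID, mirroring the inspector above. The two records are copied from the sample output; the dimension labels shown (e.g. `"regulate"`, `"outrage"`) are only the values observed in this batch, not necessarily the full coding scheme.

```python
import json
from collections import Counter

# Raw model output in the shape shown above: a JSON array where each
# element codes one comment on four dimensions. These two records are
# taken verbatim from the sample response.
raw_response = '''[
  {"id": "ytc_UgxWYesLt1uDeAf3i0N4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugypc9pALiLvIbreMzZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

records = json.loads(raw_response)

# Index by comment ID for the "look up by comment ID" workflow.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_Ugypc9pALiLvIbreMzZ4AaABAg"]["emotion"])  # -> outrage

# Tally one dimension across the batch.
policy_counts = Counter(r["policy"] for r in records)
print(policy_counts)
```

The same pattern extends to aggregating any of the four dimensions across a full export of coded comments.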