Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below (a sketch of the ID lookup follows the list).
- "All I learned from that account is that AI and a stubborn human makes cheap rage…" (ytc_UgweLlkr1…)
- "I'm fine with Ai stuff but selling it is too far like they don't make the actual…" (ytc_Ugzoq6gwY…)
- "Could it be that the math was right? I explain: If you would ask me to calculat…" (ytc_UgyubDok-…)
- "So the wealthy will no longer need useless eaters polluting the planet when AI a…" (ytc_UgyGEkLCf…)
- "this tool’s been solid for me, especially with ai detection getting stricter. GP…" (ytc_UgxAzjhYu…)
- "That’s been my takeaway. AI tells you what you want to hear. It susses out what …" (ytr_UgwHb1W3D…)
- "@ it already has decided the way it chooses to spell its name Dangerous. She h…" (ytr_UgwqLnJbW…)
- "I am getting old now, I’ve seen and been around tech since it’s infancy. If you …" (ytc_UgyDzdK4e…)
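Each coded comment is keyed by its full ID. Below is a minimal sketch of how such a lookup could work, assuming the codes are stored as JSON Lines with the same fields as the raw response further down; the file name `coded_comments.jsonl` and the helper `find_raw_coding` are illustrative, not the tool's actual backend.

```python
import json

def find_raw_coding(path: str, comment_id: str) -> dict | None:
    """Return the coded record for one comment ID, or None if absent."""
    # Assumes one JSON object per line, each with an "id" field,
    # mirroring the objects in the raw batch response shown below.
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Hypothetical usage, with a full ID copied from the raw response below:
# find_raw_coding("coded_comments.jsonl", "ytc_UgywnAP2DPq1hAhTabF4AaABAg")
```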
Comment

> he says Elon Musk has no moral compass! Musk has talked many times about we need to control AI and the dangers of AI. in fact Musk says one of the google founders is the main person that is dangerous about the development of AI, this guy is a person with hate for the right and he is clearly on the left when it comes to politics and he is very opinionated on some people and he is a person that seems to think that his ideas are the only correct ones! that is the most dangerous person of all when it comes to development of AI,

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-07-17T13:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
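Each dimension takes its value from a closed codebook. The sketch below reconstructs that codebook only from the values visible on this page, so the real codebook likely defines more codes; `CODEBOOK` and `validate_coding` are hypothetical names, not part of the tool.

```python
# Allowed values per dimension, reconstructed from the examples on this page;
# the real codebook may define additional codes.
CODEBOOK: dict[str, set[str]] = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of codebook violations for one record (empty if valid)."""
    return [
        f"{dim}={record.get(dim)!r} not in codebook"
        for dim, allowed in CODEBOOK.items()
        if record.get(dim) not in allowed
    ]
```

Checking against a closed set catches a common failure mode of LLM coding: the model inventing a label the codebook does not define.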
Raw LLM Response
```json
[
  {"id":"ytc_UgxiUdNPCFp8AM1O8Kh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxFWeC22fPq3Qn4XbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyEc4Q3t7u2lHCmLYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxxM3wM6qmx1c1BxUh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyotiU_Ps9wq5PO3kR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyn0v0w6I3Y3y3DqSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzvWuQ3WgPm44mmrY94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwAZjaVqqwqpcUno7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz328f_VwAUCwoPkzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgywnAP2DPq1hAhTabF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
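The model codes comments in batches and returns one JSON array per batch, one object per comment. Here is a sketch of how a pipeline might parse and index such a response before the codes are stored; `parse_batch_response` is an illustrative helper, not the tool's real parser.

```python
import json

def parse_batch_response(raw: str) -> dict[str, dict]:
    """Parse one raw batch response into {comment_id: codes}."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    coded: dict[str, dict] = {}
    for record in records:
        comment_id = record.get("id")
        if not comment_id:
            raise ValueError(f"record missing 'id': {record!r}")
        # Everything except the ID is the comment's coding.
        coded[comment_id] = {k: v for k, v in record.items() if k != "id"}
    return coded
```

Combined with the `validate_coding` sketch above, this would reject truncated or malformed model output early instead of letting it silently enter the coded dataset.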