Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Yes ai is already better than doctors at diagnosis. It already knows more than i…" (`ytc_Ugzwul2Zv…`)
- "Why can't there be compulsory labels on all AI content. I'm sick of AI slop…" (`ytc_UgxyI8X_H…`)
- "The Grok one is easily explained. it wasn't losing it's mind, it was following i…" (`ytc_UgyX3yysi…`)
- "@icecreamaddictroxas Python sure would be a great option for something like tha…" (`ytr_Ugy31RbXF…`)
- "I feel like there should be a minimum age to use these things. At least 25. Obvi…" (`ytc_UgwLXJqn2…`)
- ""All our agents are busy. Waiting time 30 minutes" this is what AI should replac…" (`ytc_UgxpPfMcT…`)
- "None of the AI infrastructure are currently earning enough to pay for themselves…" (`ytc_Ugy9OSeAf…`)
- "It's refreshing to finally see youtubers get this right. Disclaimer: I'm princip…" (`ytc_Ugw35L5ew…`)
Comment
If we have a system that requires constant profit, if it's designed in such a way that lies, cheating, exploitation, and ruthlessness lead to greater success, then every technological advancement is dangerous. This has nothing to do with the things we create, but rather with how we handle them. AI is dangerous because it's more profitable to misuse it than to use it wisely and carefully. I always see symptoms criticized without exploring and addressing the root cause. The problem is us, who have created a self-destructive system.
youtube · AI Governance · 2025-07-09T09:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgwyJmd7ose-8ka9vOV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzedQ5MeBx1xn6E9Jd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwEhDsXqwf1M6Pu3Kl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwnOvAxm935GD7IL294AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyz3LOElt0D9WIleUp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyK0QO3TLr29xgvjKd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwUmNAWOX38pM06cId4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzMi5l-AWi39vbmx9t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgydYne2z7N2wOYqqk14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxtx7J6akadqDSPLQ54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]
```
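A response in this shape is straightforward to consume downstream. The sketch below parses such a JSON array and indexes the codings by comment ID; the per-dimension value sets are only the values observed on this page, not necessarily the full codebook, and the two-entry `RAW` string is an abbreviated stand-in for a real response.

```python
import json

# Abbreviated stand-in for a raw model response (two entries from above).
RAW = (
    '[{"id":"ytc_UgyK0QO3TLr29xgvjKd4AaABAg",'
    '"responsibility":"distributed","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_UgwyJmd7ose-8ka9vOV4AaABAg",'
    '"responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"}]'
)

# Values observed in this response; assumed vocabulary, not the full codebook.
OBSERVED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "indifference", "mixed", "outrage", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Return {comment_id: coding}, flagging values outside OBSERVED."""
    out = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue  # skip malformed entries with no ID
        entry["_unknown_values"] = {
            k: v for k, v in entry.items()
            if k in OBSERVED and v not in OBSERVED[k]
        }
        out[cid] = entry
    return out

codings = parse_codings(RAW)
print(codings["ytc_UgyK0QO3TLr29xgvjKd4AaABAg"]["policy"])  # regulate
```

Looking up the coded comment shown above by its ID recovers exactly the values in the Coding Result table (responsibility `distributed`, policy `regulate`, emotion `fear`).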