Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples listed below; a programmatic lookup sketch follows the sample list.

Random samples:
- "It's the future. Artists will need new ways to make money (if they were even mak…" (ytc_UgzsOMP2p…)
- "You just KNOW trump is gonna ask Elon during dinner if he can build robot soldie…" (ytc_UgxirLXhc…)
- "@ConfusedAnimator7 pls read my comment lol I said I don’t do art as much as I use…" (ytr_UgxS7lqu2…)
- "Your energy fits the HiveGate vibe 🔥 Would love you to join our #VibeGate wave!…" (ytc_UgwSc4prn…)
- "The job thing. I understand what he said, yes, but, there are a great many detai…" (ytc_Ugx2G8A1P…)
- "It’s not about fearing AI itself. It’s more about who has their hand on AI. At t…" (ytc_UgxlQIM-d…)
- "I have watched many of your videos and they are really good and I appreciate you…" (ytc_UgyV8Wv5l…)
- "As a software developer, I interact with AI 24/7. As the CTO, I don’t need to hi…" (ytc_UgyrES66q…)
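For programmatic access, here is a minimal lookup sketch in Python. It assumes the coded records are stored one JSON object per line (the file name `coded_comments.jsonl` and the storage format are assumptions, not confirmed by this page), with the same fields as the raw response shown at the bottom.

```python
import json

def lookup_raw_response(comment_id: str, path: str = "coded_comments.jsonl"):
    """Return the coded record for a comment ID, or None if absent.

    Assumes one JSON object per line, each with an "id" field matching
    the records in the raw LLM response below.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch the record for the comment inspected below.
print(lookup_raw_response("ytc_Ugy5nzhpBpXHtDITV6x4AaABAg"))
```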
Comment
I really think focusing AI regulation on AGI is a pointless distraction that obscures the way that AI consistently is used in harmful ways already. Given that "intelligence" isn't really a quantitatively measurable thing (not in its entirety and not with any accuracy) AGI is already relegated to being a buzzword rather than an actual standard anything can be compared to. Meanwhile LLMs are being sold as an alternative to human workers and Sora is making misinformation more prevalent. The people who profit from this harm are a very small group and many already have lifetimes worth of money. It's frankly stupid to be talking about AGI like a) it'll probably exist and b) it's a relevant issue right now. There are real, non-sci-fi issues with the industry that can be addressed.
Platform: youtube · Video: AI Moral Status · Posted: 2025-11-01T02:4… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
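The dimension values above come from a fixed codebook. Below is a minimal sketch of a validating record type; the allowed value sets are inferred only from the values visible on this page (this record plus the batch response below), so the real codebook may contain additional categories.

```python
from dataclasses import dataclass

# Value sets inferred from the records visible on this page; the full
# codebook may define more categories.
RESPONSIBILITY = {"none", "developer", "company", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"fear", "outrage", "approval", "indifference", "mixed"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if the model produced a value outside the known code sets.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")
```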
Raw LLM Response
```json
[
  {"id":"ytc_Ugy5nzhpBpXHtDITV6x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugwhy-_ektzjYrwZg3V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyY81eIZ9Ht6vm_l8d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzh2fxLGfLTzk2nmJl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw2zUO-efpUZtWy4Ex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyOi8Sl6ZGRdkwZpyd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzlMGwP678Uvk4uTwt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwEDAY-BLwPAV980N14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxUevvTjVxa5Bhhw3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyF8QubCPPM10BS66h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
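The model answers each batch with a single JSON array, one object per comment. A minimal parsing sketch, assuming the response is valid JSON exactly as shown (`parse_batch` is a hypothetical helper name, not part of any documented API):

```python
import json

def parse_batch(raw: str) -> dict[str, dict]:
    """Index a batch response (a JSON array as above) by comment ID."""
    records = json.loads(raw)
    return {r["id"]: r for r in records}

# Usage:
# coded = parse_batch(raw_llm_response)
# coded["ytc_Ugy5nzhpBpXHtDITV6x4AaABAg"]["emotion"]  # -> "indifference"
```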