Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- The part that was not said..... our political representatives in Washington get … (ytc_Ugzs5b_xl…)
- Some one obviously never took an economics class - else he would have learned ab… (ytr_Ugzy7wyrI…)
- This is a great video for intro to philosophy students! You get to watch AI be b… (ytc_Ugw3RMa1H…)
- Wait until it zoomes out again and say's "Ai art observing Ai art while Ai art o… (ytc_Ugy87HPSf…)
- When did our security automatically include Israel? But what I find absolutely … (ytc_UgwdR9c3y…)
- AI is ran by selfish people that want to bring humanity years back. Each year th… (ytc_UgxYGlWaO…)
- Mostly due to the weirdness that while following a totally different goal it lea… (ytr_Ugx4UUsl3…)
- While the status quo is making videos, I’m building AI powered SaaS software, MC… (ytc_UgwtNYEmC…)
Comment
> Addendum: Soooo many people love to strawman things like what I just said by implying Im saying AI is harmless. So just in case: I would put money on AI not going AGI in the next decade. That in no way shape or form means I think its safe and harmless. The fact is, long before we had LLMs we had AI algorithms behind the scenes of many powerful systems that were already doing enormous amount of damage. This technology should absolutely be heavily regulated, and frankly I think much of what the LLM companies have done should be outright illegal, if the law isnt clear enough on that it should be made clear and retroactively applied.
>
> I dont think AI is safe, on the contrary, I think the kind of wild speculation going on in videos like this one actually risks obfuscating and distracting us from the very real damage these things are already doing and have been for much longer than just this latest iteration. Its very frustrating to have almost every discussion about AI be hijacked by fanciful AGI debates.
youtube · AI Moral Status · 2025-10-31T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
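A coding like the one above can be sanity-checked against the category values that appear in the raw responses on this page. A minimal sketch, assuming only the values observed here (the real codebook may allow more; `invalid_fields` is an illustrative helper, not part of the actual pipeline):

```python
# Allowed values per dimension, as observed in the raw responses on this
# page; treat these sets as examples, not the authoritative codebook.
CODEBOOK = {
    "responsibility": {"none", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed"},
}

def invalid_fields(coding: dict) -> list[str]:
    """Return the dimensions whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if coding.get(dim) not in allowed]

# The "Coding Result" table above, as a dict:
row = {"responsibility": "distributed", "reasoning": "consequentialist",
       "policy": "regulate", "emotion": "fear"}
print(invalid_fields(row))  # → []
```

A missing or misspelled value (e.g. an LLM emitting an off-codebook label) shows up as a non-empty list, which is a cheap guard before storing a coding.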
Raw LLM Response
```json
[
{"id":"ytr_UgxhOnrWN7_fgKSOS-x4AaABAg.AOxn95cNpLsAP-YYRFy2ha","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgyhxFL30B3lhfg5zX54AaABAg.AOxkxHTaEbVAOzRbU7J7A1","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgyrknVmtrkp5GuJec14AaABAg.AOxg7ERjunMAOyh8uhQE62","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
{"id":"ytr_UgyrknVmtrkp5GuJec14AaABAg.AOxg7ERjunMAP-__gAArSz","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgyrknVmtrkp5GuJec14AaABAg.AOxg7ERjunMAP8bidSSLav","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugy-9l3p47Y3HD5zs5V4AaABAg.AOxU9wjgwOqAOxV7sBCRoV","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwjdYfnsDQuw2Edxfx4AaABAg.AOxT4k3qO7zAP3Hr8dPGho","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwJFaZBAC01Nvug29F4AaABAg.AOxRnH3XJiPAOxRtnb4U61","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgwJFaZBAC01Nvug29F4AaABAg.AOxRnH3XJiPAOzeSC0e5El","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytr_UgwJFaZBAC01Nvug29F4AaABAg.AOxRnH3XJiPAP4AAALEb2S","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
```
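The "look up by comment ID" feature described at the top of the page amounts to parsing a raw response like the one above and indexing it by `id`. A minimal sketch in Python; the `index_by_comment_id` helper and the truncated two-row sample are illustrative, not the page's actual code:

```python
import json

# A raw LLM response in the shape shown above: a JSON array of coding
# objects, one per comment, each with an "id" plus four coded dimensions.
raw_response = """[
  {"id": "ytr_UgxhOnrWN7_fgKSOS-x4AaABAg.AOxn95cNpLsAP-YYRFy2ha",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"},
  {"id": "ytr_UgyhxFL30B3lhfg5zX54AaABAg.AOxkxHTaEbVAOzRbU7J7A1",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw response and map comment ID -> coded dimensions."""
    codings = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in codings}

lookup = index_by_comment_id(raw_response)
print(lookup["ytr_UgyhxFL30B3lhfg5zX54AaABAg.AOxkxHTaEbVAOzRbU7J7A1"]["policy"])
# → liability
```

Because each ID appears once per response, a plain dict is enough; merging several responses would just mean updating the same dict with each parsed batch.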