Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgwOP1PSO…: "I Ai Pokimane though 😳 this entire situation is hilarious. Imagine losing it al…"
- ytc_Ugw4Jj1vP…: "He's not even an artist he just uses AI I can do the same thing…"
- ytc_UgziAWwy5…: "but there will be a hunger..a deep hunger..for realness, authenticity, humanity.…"
- ytr_Ugw1SDIky…: "Yeah, I think he gets too much credit for every technological achievement, just …"
- ytc_UgzhmznHp…: "lol we have a policy so it safe no A.I. can become real because we have a policy…"
- ytc_UgyP5Xc1W…: "AI "art" will NEVER be art! AI image generators training off of actual art IS th…"
- ytc_Ugy1A3DfJ…: "I am not convinced by this video that GPT is deleterious to my intellectual deve…"
- ytr_Ugzh-ft4y…: "@DoomDebates Nothing is ever "identical". For example, we will not need to memor…"
Comment
Arguably less bad than a landmine… but only because landmines being indiscriminate is so horrifically worse because they must explode the kids that trigger them. A well made “LAW” in theory has a chance to recognize a child, civilian or even a soldier surrendering/injured/already captured or otherwise no longer an active combat.
That said, still bad enough that I would prefer them banned. Bare minimum regulated on a similar level of seriousness as Landmines.
I might be ok with AI systems highlighting potential targets as long as a human whose job is specifically to look for reasons not to fire. Such a weapon system would be ready to easily be reprogramed as an actual LAW in the case of an extreme Total War Scenario to satisfy the hawks worst case scenario fears.
Source: youtube · Posted: 2026-03-28T02:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxs6Zorg8yD_dxJFNV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyGWcrHgShLkDX5qFF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwH-VwuUdxoe-d1BJN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyd8QnCenaREuldmRd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-9vNpgxhegmYFu314AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyJLFrjGKu6baV1E-J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxgWGmUGOlKxFr4zsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwMQSsr24AUY_vxLUt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz48H9rIbPUpUN6K9h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzeu8--kCX5Ga88HCB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```