Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:
- "I'm way more worried about whoever gets to decide the targets. AI murder drones …" (ytr_Ugzy6U4Nz…)
- "I just saw a video where someone got second and a robot got first in an art cont…" (ytc_Ugw3bN8uf…)
- "I love your content and I absolutely agree that AV companies will do anything le…" (ytc_UgwA6_Fun…)
- "This entire story is made up designed to hype people up and keep investor money …" (ytc_Ugy2h2ajS…)
- "But what if AI doesn’t take any jobs, and we make people’s lives better for no r…" (rdc_l4twk8p)
- "Use ChatGPT to make some PP slides old man :) You are still using hand drawn thi…" (ytc_UgzXuRNE7…)
- "neural network algorithms being called ai has done so much damage to the way peo…" (ytc_Ugy93Tqim…)
- "AI requires power! Just unplug! In wartime, power is the first thing to be disab…" (ytc_UgxnApSe8…)
Comment
You are saying the judgement of the human is the issue. Actually it's just the accuracy. Human soldiers make mistakes all the time, the military has an entire Justice system to evaluate their choices and responsibility. And sometimes it's not a question of a mistake but just the reasonableness of choosing one thing versus another in the moment.
If an autonomous system was found to make those choices more accurately, then your hallowing of the human conscience goes away. It would be irresponsible NOT to equip our weapons with these systems as safety interlocks.
We already know that it is irresponsible to use forward observers anywhere where a drone can do a similar job.
Is your complaint that machines make different kinds of mistakes than humans and we're used to the ones that humans make? Or that we can figure out why the machine made a mistake, whereas we can never be sure with our soldiers? That seems to be a point in favor of the machines.
Self-driving vehicles have gotten us off to a bad start because this is just about the hardest task you can hand to an autonomous system (brain surgery would be far more amenable). The initial intent was to demonstrate that self-driving cars were safer than humans and then the legislation would follow. Designers can be faulted for naivety but the reputation has stuck.
Source: youtube · Posted: 2026-03-11T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugz7bxPrZ4ktDOixWyJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxEzJElYC6nD5dQXql4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugwuy7Lbxw3YDCO2Nj14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw5PAX1-qMY73lXbph4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxqP6yWxaMoI4gI9g14AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy4c5nu3GtJxxzL6C94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzj0utiZyrJ0AMN-8t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy2r56Rc9FWdFf6Kv94AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzsT-3UIkdWaIEn2Eh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyUedzy5nSjUnuZr2p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
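A raw response like the one above can be parsed and indexed by comment ID before being rendered into the per-comment dimension table. The sketch below is a minimal illustration, not the tool's actual pipeline; the `ALLOWED` codebook is an assumption inferred from the values visible on this page (the real coding scheme may include other categories), and the sample record ID is hypothetical.

```python
import json

# Assumed codebook, inferred from values seen in this page's output;
# the real coding scheme may differ.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation",
                "indifference", "approval", "unclear"},
}

# Hypothetical raw model output (one record, shortened ID).
raw = ('[{"id":"ytc_a","responsibility":"distributed",'
       '"reasoning":"deontological","policy":"liability",'
       '"emotion":"indifference"}]')

def parse_codings(raw_json: str) -> dict:
    """Parse the model's JSON array, reject values outside the
    expected codebook, and index the records by comment ID."""
    by_id = {}
    for rec in json.loads(raw_json):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(raw)
print(codings["ytc_a"]["policy"])  # liability
```

Validating against a closed codebook at parse time is what makes the "look up by comment ID" view trustworthy: a malformed or hallucinated category fails loudly instead of silently appearing in the table.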