Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgyZLFndi…: "it will be used for capitalist purposes, and so, no matter how "good" the AI is,…"
- ytc_Ugzkbi48j…: "I have heard before that LIDAR is important to make a car FSD but never have the…"
- ytc_UgwNNSIXp…: "I feel so bad for hayao miyazaki because he worked so hard on drawing while othe…"
- ytc_UgxMvJrzG…: "I'm curious how these self driving cars operate in the snow, rain, and fog? Sure…"
- ytc_Ugx1csZnP…: "90% of customer support is not related to issue refunds at all.. its to understa…"
- ytc_Ugz2F0hR1…: "I'm going to assume that this tethered spinal cord condition is extremely rare t…"
- ytc_UgyBUMbgu…: "What are your thoughts with the latest ChatGPT iteration, 2 years later? Time to…"
- ytr_UgwQASNuD…: "Not quite. As you're assumption relies on the notion AI couldn't exist on its ow…"
Comment
> Johnny as you said automation bias is one of the biggest and most important aspect of using AI in warfare. The only problem with that is that will governments shift accountability to software where they did not act in the correct manner as you said bomb a civilian building? Also when analyzing this "incident" will the report state machine error or human error. One of the most important principles when it comes to warfare as you definitely know as highlighted in the Geneva conventions on the principle of distinction and principle of proportionality and as your guest said it could be improved by AI to have less civilian causalities. However the accountability aspect is lacking which in turn makes it much scarier. Hopefully we see advantages in this arena to make sure that no civilian is affected by war in the future.
Source: youtube · 2025-02-14T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwD5BconxlaZrw_tDt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxZ8N_o43LFIREyFeF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyhCApX07IRqIa2mH94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzKA1efE_RoY2OklRF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyWjzPg-gnICOX8ejl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYk7tlTPx8TlgwyYd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyF1D8gHD4ECTepa654AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugzj7Gaov4zBVWhwoRJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugziql6DiZx3rGbUuV54AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyDaqWjD2iDrwKCKfR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
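The raw response is a JSON array with one record per comment, each carrying the same four dimensions shown in the coding-result table. A minimal sketch of how such a batch might be parsed and indexed by comment ID; the per-dimension vocabularies below are inferred from the values visible in this response, not an official codebook, so treat them as an assumption:

```python
import json

# Allowed values per dimension, inferred from the responses above
# (assumption: the real codebook may define additional categories).
VOCAB = {
    "responsibility": {"government", "developer", "company", "user",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response and index records by comment ID.

    Raises ValueError on a missing id or an out-of-vocabulary value,
    so malformed model output fails loudly instead of entering the dataset.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record without id: {rec!r}")
        for dim, allowed in VOCAB.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in VOCAB}
    return coded

# Example with the first record from the response above:
raw = ('[{"id":"ytc_UgwD5BconxlaZrw_tDt4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]')
coded = parse_batch(raw)
print(coded["ytc_UgwD5BconxlaZrw_tDt4AaABAg"]["responsibility"])  # ai_itself
```

Failing loudly here is a deliberate choice: a batch with one out-of-vocabulary label is easier to re-run than a dataset silently contaminated with invented categories.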