Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "if it's so easy why don't you do it?"... Honestly because I didn't know how I'd… (`ytc_UgwiwnOvV…`)
- AI IS NOT GOING TO TAKE OVER THE WORLD GUYS!!! DON'T WORRY!! JUST TRY NOT TO PUT… (`ytc_Ugw_Xf4w2…`)
- I agree with everything you've written here but there is a sad reality: AI wont … (`ytr_UgycymUCb…`)
- It has ALWAYS been a small group of people who decide how things progress. Moat … (`ytc_Ugys8W6Ob…`)
- Someone should mention to these assholes that elections have consequences. That’… (`rdc_f508xu5`)
- I live in Texas and those monster Data Centers are being built everywhere here. … (`ytc_UgwVQ4EZc…`)
- Oh by the harry potter fanfiction guy? Yeah he's had an AI bugaboo about sci-fi … (`ytc_UgyEYCNuy…`)
- @macbvalley-1996bruner Thanks for sharing your comment! 🤣 Those AI robots may be… (`ytr_UgzwBkAj6…`)
Comment
Ethically, why are autonomous weapons systems worse than inaccurate weapons systems such as conventional artillery? In both cases, lethal force is put into motion with the knowledge of a risk that an inappropriate, unintended target may be hit. Why is it worse when this happens because of imperfect programming rather than inherent inaccuracies of the system? And, as a practical matter, it's probably less likely to happen with autonomous weapons systems than with systems of limited accuracy such as conventional artillery (or carpet bombing).
Source: youtube · 2024-06-30T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwuq0kks4XnzwOPdSl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzNVWMd2FhLN53BH8p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyvGs30VVT3mIsVNHV4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-9WxWVAhTeJMu7d94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxjUKdUM4nbP3ZaG3V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzXwJ6iWMoStL7tpNZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxh4N8qzPEK7t1zoJJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYjrz6Y8PxnONcfTd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx38UQJHfVitmIoif14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyc1IPkwr5v2vndhmt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
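The raw model response is a JSON array with one record per coded comment, which is what makes lookup by comment ID possible. A minimal sketch of how such a batch response could be parsed and indexed (the `index_by_id` helper is hypothetical, not part of the tool; the field names and two sample records are taken from the response above):

```python
import json

# Two records copied from the raw batch response shown above.
raw = '''[
{"id":"ytc_Ugwuq0kks4XnzwOPdSl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-9WxWVAhTeJMu7d94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
# Retrieve the coding for one comment by its ID.
print(codes["ytc_Ugy-9WxWVAhTeJMu7d94AaABAg"]["reasoning"])  # consequentialist
```

Keying by `id` means a coded comment can be re-inspected in constant time, which matches the "Look up by comment ID" affordance of this view.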