Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
There are rules of military engagement that protect civilians and criminalize the misuse of violence in the event of war. If used by armed forces (this is already happening) to create autonomous weapons capable of choosing targets, AIs could be programmed to disregard the rules of military engagement or, worse, discover for themselves that these rules should be disregarded because they make success of its mission difficult. Statesmen, military ministers, battlefield commanders and soldiers are or can be held responsible for the war crimes they voluntarily choose to commit or fail to prevent when they can. But who will be responsible if an autonomous weapon chooses to commit a crime? The creator of the AI, the weapons manufacturer or the military that decided to employ it, completely losing the ability to make choices in the battlefield? That's a problem worthy of attention I suppose.
Platform: youtube · Video: AI Responsibility · Posted: 2024-06-16T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzfL97b_1nemN0Clbd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwoxQvIcjRpJ7cvf054AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyFigHDsyAo2Gl3wS54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzbDP_mJjJzyO2_aCB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgykhEQFBsGUSzpCTXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzJloDBV9uc9XXEA9N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxqmu0us6WbTQINMfZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxq9UzNz-eLtvKgjFZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw-Hwk_MDDigQjwavh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwAhUF5xiMqSGN0nJt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
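A batch response like the one above can be turned into per-comment coding records with a small parser. The sketch below is a minimal, hypothetical example: `parse_batch` and `ALLOWED` are illustrative names, and the allowed label sets are assumed from the values visible in this dump, not taken from the actual codebook.

```python
import json

# Assumed label sets per coding dimension, inferred from the labels seen in
# this dump; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) into a
    mapping from comment ID to its coded dimensions, rejecting any row whose
    label is outside the known sets so bad codes never reach storage."""
    coded = {}
    for row in json.loads(raw):
        dims = {dim: row[dim] for dim in ALLOWED}
        for dim, value in dims.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim} label {value!r}")
        coded[row["id"]] = dims
    return coded

# One row from the batch above, as a usage example.
raw = ('[{"id":"ytc_Ugxq9UzNz-eLtvKgjFZ4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_Ugxq9UzNz-eLtvKgjFZ4AaABAg"]["policy"])  # → regulate
```

Validating against a closed label set at parse time is what makes a coding pipeline like this auditable: any hallucinated or misspelled label from the model fails loudly instead of silently entering the results table.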