Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Comment
To exaggerate intentionally, maybe don't drop the moon on a house to kill a flea?
Sure, it will kill the flea and demolish the house, but what about the rest of the planet around it?
If you need to take out a sniper, why are you using a bomb?
As you have stated, this is obviously overkill in more than one sense of the word.
If AI can figure out what will happen after a bombing, AI can figure out how to kill one man with one bullet without any collateral damage.
-
It seems that the money and time it takes to build a weapon aren't even taken into account in regard to the end results.
How much should it cost to take out a sniper?
What is the cost of one bullet?
How much is the cost of a guided bomb?
What is the cost of each civilian life lost?
What is the cost of each building destroyed by bombing?
Source: youtube
Posted: 2024-07-29T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwtufjJRDpIt3XuOxB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwaENeA222Q9v183KZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzqXdXgwHlwe8tY7dp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwaZg1Y8_Dnbo_B-sp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyMTIcd-SAUkvBekM54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyuCQIIvVgjDpeKZ5B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxLDj3KAbOsr1XNcvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxa7cdZzRKCQqTY07h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwzMOgGbxsrCAfNAkh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx2F8cBOM5pxeBESZp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
```
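
The raw response above is a JSON array of per-comment codes along the four dimensions in the table (responsibility, reasoning, policy, emotion). As a minimal sketch of how such a batch response can be turned into a per-comment lookup — assuming only the format shown above; the helper name `index_codes` and the abbreviated two-row sample are hypothetical — one might write:

```python
import json

# Abbreviated sample in the same shape as the raw LLM response above
# (two rows only, for illustration).
raw_response = """
[
  {"id": "ytc_UgwtufjJRDpIt3XuOxB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx2F8cBOM5pxeBESZp4AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Map each comment ID to its coded dimensions.

    Missing dimensions default to "unclear", matching the scheme's
    existing fallback value.
    """
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
        for row in rows
    }

codes = index_codes(raw_response)
print(codes["ytc_UgwtufjJRDpIt3XuOxB4AaABAg"]["policy"])  # regulate
```

This supports the "look up by comment ID" workflow directly: given an ID such as the one in the coding result, the dictionary returns its four coded values.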