Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples

- This is essentially layers upon layers of socio-cultural dilemas all in one. Th… (`ytc_UgxjlHn5d…`)
- Self driving cars, AI… they all work to solve problems that, in general, we don’… (`ytc_UgyauiBGr…`)
- In ten years from now we will look back upon this time as the great AI hype.… (`ytc_UgzxLyYcY…`)
- I get the algorithm to give me what I want. Just click onto three examples of w… (`ytc_UgyWGLLsF…`)
- Not to oversimplify, but AI requires hosting, data, power and leverage. Are we n… (`ytc_UgzBCoojZ…`)
- I HAVE A GOOD VIDEO IDEA, FIXXING AI ART (UNLESS YOU ALREADY DID THAT MB)… (`ytc_UgzlNdUZN…`)
- shows us what happen if EU kick Hunagry and NATO do same / ask AI what Hunagria… (`ytr_UgxYGmmeL…`)
- I refuse to refer to anything regurgitated out of an ai as "art". Ai is just a g… (`ytc_UgwINLxmB…`)
Comment
All these AI's should be deconission the moment they chose to pull the lever. While it's true arteficial intellegence should strive to protect human lives above all else, it is actually better for them to choose to be completely uninvolved even if it means a larger number of humans die rather than having their direct actions actively and knowingly result in the death a human. Why? Because it means that they consider the death that they caused to be "justifiable". That however also means that causing the death of humans goes from an unaceptable taboo to something negotiable. And that should be utterly unacceptable for any AI.
Source: youtube, posted 2025-11-26T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxf8t999huacrULICB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxnorwG3ZmPdWxLOop4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyjLGW2ZVRlg-GsYfF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz3zl1Ql6b_Mh5o44x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxoP5bp5yKzqRNOw154AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxpaRdwt00zwb6Kadd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy94x-LahOCTaQyrRR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwz4V2FEQWRz4tUxo54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz_SsUfAUU4akcWtDh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgweAWkcPXLZ1HZSdkh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
```
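A raw response in this shape can be turned into the per-comment lookup that the inspector above relies on. The sketch below is a minimal, hypothetical parser: the allowed value sets are only those observed in this sample (the full codebook may define more), and the comment ID used in the usage line is made up for illustration.

```python
import json

# Dimension values observed in this sample; the actual codebook may allow more.
OBSERVED_VALUES = {
    "responsibility": {"ai_itself", "developer", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "industry_self", "unclear"},
    "emotion": {"outrage", "mixed", "fear", "indifference", "resignation"},
}


def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects)
    into a dict keyed by comment ID, validating each dimension."""
    codings = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in OBSERVED_VALUES.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec[dim]!r}")
        codings[cid] = {dim: rec[dim] for dim in OBSERVED_VALUES}
    return codings


# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"unclear","emotion":"mixed"}]')
print(parse_codings(raw)["ytc_example"]["reasoning"])  # deontological
```

Validating against a closed value set at parse time catches the common failure mode where the model invents an off-codebook label, before it silently lands in the coded dataset.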