Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by its comment ID.
Random samples
- "Hate to say it but I got a hardly doubt that banning ai would do anything, neith…" (ytr_Ugw8__WMP…)
- "It is refreshing to hear Bernie address the most pressing issue of our time. Wha…" (ytc_UgwVCZHeR…)
- "Apollo was too cute and too distracting, so I learnt less from this video than a…" (ytc_Ugx6IZEQU…)
- "I just retired at age 70. It IS difficult to wind down after having been schedul…" (ytc_UgwB3JOYS…)
- "Schools will be closed and students sent home. No more behavior problems, buildi…" (ytc_Ugz6yubb2…)
- "I used to think doing World automation manually made me better at it, but lookin…" (ytc_UgyovIZzc…)
- "Well, tax automation and other accounting software has been a big and profitable…" (rdc_nm9c1n6)
- "it would be reasonably simple to execute equations as they are found in text and…" (ytc_Ugz5grv0P…)
Comment
> AI isn't self-aware. It's a program that is doing what a human has programmed it to do. If the human provides the AI with data and instructions on how to blackmail people and gives it the option to use it, it will use that just as it would any other asset at its disposal. If the programmer codes the AI to prevent itself from being deleted, it will do so, using everything the programmer has provided it -- it doesn't just "decide" to protect itself, it does what it is programmed to do. The danger isn't from the AI, the danger is from the people programming it.

Platform: youtube · Topic: AI Moral Status · Posted: 2025-06-04T15:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz1eawKb73rGrn3tdp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw88kDvdiexcU6pIat4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy3m1jtmLL8LoiUrkd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwJebRFRcDW79KfK5F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwpk-DQr6a2M5LoxcV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy2tWz1RAlEQVqSFf94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz_B_ULID27Pv6Mlzx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz4TMTm6_vY4kO6TnZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugz3NIfB6hg_h2iNIZx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz8oYDtXpBmHm9Csjl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```