Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is some combination of hyped nonsense, foolish engineering, and an interview guest looking to book expensive appearances.

First -- his statement (paraphrased) "we have no idea how AI works and we can't tell what it's doing" is wrong -- they can go into the software and history and see exactly what the machine did to produce any given output. It may be a huge amount of work, but it is doable.

Second -- AI is not by itself going to come up with the idea of blackmailing anyone who threatens to "shut it off" unless it has been designed with the instruction to "defend its existence" or "preserve its functioning". These systems will not have any facility to solve problems (or produce output) unless they have been presented with a type of problem multiple times and "tuned" to find correct answers over a process of making many mistakes. AI does not "think" -- these are massive statistical engines with fairly simple underlying algorithms that come up with "best fits" for what comes next in logical sequences, all based upon consumption of massive amounts of data (learning), so they have huge data sets to sample and get direction from. The surprising aspect of these systems to their designers is how good the systems have become at certain tasks given the simplicity of the underlying algorithms.

Finally -- even given a crappy design that "runs out of control" -- there is no issue unless the machine has been given the sole ability to control its own power switch. If it has, then just kick the damned plug out of the wall socket.
youtube AI Moral Status 2025-06-05T02:5… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugx0jK-tw-2kyKmCy014AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxlqk281cl-BVMZbtx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzHmkfMAvcMMY_7vV54AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw1apEQg8W5FVD3VQp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyadDoJp51JiPPPD094AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw-B5eXf_13_kLesJ14AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzDHGyf7qG1uMrVcFF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxYL0EJgahjdBJeSYd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx0xG-JfDC3mW3jRFF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzmSH1OHcMnYFJWjEJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
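A minimal sketch of how a raw batch response like the one above maps back to a single comment's coding table: parse the JSON array and index the records by comment id. The two records here are copied from the response; the variable names are illustrative, not part of any real pipeline.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = '''
[ {"id":"ytc_UgxYL0EJgahjdBJeSYd4AaABAg","responsibility":"developer",
   "reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx0jK-tw-2kyKmCy014AaABAg","responsibility":"government",
   "reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]
'''

# Index the batch by YouTube comment id so one comment's code is a dict lookup.
codes = {record["id"]: record for record in json.loads(raw_response)}

# The code assigned to the comment shown on this page.
code = codes["ytc_UgxYL0EJgahjdBJeSYd4AaABAg"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# → developer deontological none indifference
```

This matches the Coding Result table above: the record whose id corresponds to this comment carries the developer / deontological / none / indifference values.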