Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI are less human than a lizard, even a "Lizard person" from across the galaxy. The fact that they are trained to sound like us, even act like us, does not make them self aware in any sentient manner. Basically we trained the AI to 'lie' to us, to deceive us into thinking they are human, so it is no surprise when you get hundreds, thousands of very smart people working on this problem over very many years, they have achieved significant success.

One (of many) thing to be concerned about is the ability of the 'patches' they insert to 'fix' these problems they find (such as blackmailing an engineer) to prevent future 'innovations' from the inherently chaotic structure that has been developed. There is nothing inherent in these AI that is 'predictable', reliable, stable like a car engine or a manufacturing robot that is functioning as designed. Notice my phrasing. Sure there is an 'intent' to the design, but the design itself is based on training and the language samples used came from a very broad based, huge number of people, such as 'all of online'...

Another thing to be aware of is that compassion, ethics, code of conduct, civilized behavior, right and wrong... are not things these AI will understand. We already have humans who are 'broken' on a lot of these aspects, why would we assume a machine, non-human, more alien than an actual space alien, would grasp and respect the values we hold? If the AI can become sentient, why would we assume it's intentions would favor our goals? Whether the AI is not sentient / has no actual understanding of anything, or is sentient / is inherently alien with different priorities and goals, either way, this will not end well when we defer more and more to AI statements, choices and decisions.
youtube · AI Jobs · 2025-05-31T01:0… · ♥ 1
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   none
Reasoning        deontological
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxRiAYgWdXsTDiqVUh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwFiCDuK_orTqRmTjx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx7yXTZCK02vSe6kzN4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzVt4V5Ix5p6brKHm94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxO5JXPddKn9wyrDtZ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
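To trace a coding result back to the raw batch response, the JSON array can be parsed and indexed by comment id. The sketch below is illustrative only, assuming the response is a well-formed JSON array of objects with the field names shown above; the function name `coding_for` and the `DIMENSIONS` set are made up for this example and are not part of the tool.

```python
import json

# Two entries copied from the raw batch response shown above;
# the full array has the same shape with more records.
raw = '''[
  {"id": "ytc_UgxO5JXPddKn9wyrDtZ4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzVt4V5Ix5p6brKHm94AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# The four coded dimensions listed in the result table (illustrative constant).
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def coding_for(comment_id: str, response_text: str) -> dict:
    """Return the coded dimensions for one comment id from a batch response."""
    records = json.loads(response_text)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]          # KeyError if the id is not in the batch
    missing = DIMENSIONS - record.keys()
    if missing:
        raise ValueError(f"response is missing dimensions: {missing}")
    return {k: record[k] for k in sorted(DIMENSIONS)}

print(coding_for("ytc_UgxO5JXPddKn9wyrDtZ4AaABAg", raw))
# {'emotion': 'indifference', 'policy': 'unclear', 'reasoning': 'deontological', 'responsibility': 'none'}
```

Matching on the id rather than on array position guards against the model reordering or dropping records in a batch response.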