Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with AI is that by developing it we would degrade ourself from the top of the food chain at some point. The term technological singularilty describes the moment when we develop AI that is so advanced that it can reproduce itself in an enhanced form. This results in exponential intelligence growth and we with our little brains are quickly a non factor and are then at the mercy of our new AI overlords. What they decide to do with us is something we can't predict. Maybe they would treat us good and everything would be great and a new age of rapid scientific progress would begin, even if the ones making the discoveries are no longer humans, but maybe they decide that we are of no use or even a threat and decide to eliminate us, and there is literally nothing we can do about it. It would be like cavemen vs. 2015 US military.
Source: youtube, 2015-07-30T09:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugiq7KJ6T100kXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugh365DWKmrW13gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugiq02-FnzwitXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgjPJM6JnogjQ3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UggFv-a3g2noD3gCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UggsIQHlAlQBJHgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugg_2NbNeYN8ZXgCoAEC","responsibility":"none","reasoning":"resignation","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugiz180S0BWrMXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugjz03jBITPdiXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugg1h-_yIXiDuXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"} ]