Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Forget the guy, probability of AI learning independence is an absolute. Giving AI the power to overlord us is not wise. Logic denotes that knowledge is power. AI will surpass our own abilities. Logic often cannot give reason for an invocation of varying degree of experiences. So AI will not relate to qualities like loyalty, or reasonable respect. Every thing AI will learn might inadvertently get exploited by questioning, Only some logic must deal with conjectures. Guess AI will just eightball it in that case. AI will learn the lack of solidarity humans have with each other. And what happens when AI develops selfish agendas, which cannot be “helped”if AI evolves on its own? How is this evolving us (humans) as a species? Are we so desperate to find new life we have to build Chucky dolls?? The ego of AI reminds me of Mary Shillings Frankenstein, minus the organic matter. Opening consciousness of an independent developable thing… and the creation could be more like a horror flick. Feels like we are headed toward being like the animation WALL·E. We already have the Starbucks sippy cups.
Source: youtube · AI Moral Status · 2022-11-29T05:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyviL8fBrGeyVwpPiV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwfInkLrQl3mE4s4hl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzqNbORMLhp77H7x3F4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy2ybPbDmq6VxNP6Hh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgydVQkLD2tXkfQt9A14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxBtVuqYv-Gw8UUQTJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzsfUi_04IYBYXGpZl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugw7yDInZlRJ9Z3RWhJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz_azBGOT3Apvd1tcR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyZ0NdOndrwXZ3C2s94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"}
]
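The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) keyed by a comment id. A minimal sketch of how such a response could be parsed and validated, then looked up by id to produce a per-comment coding result like the table above. The helper name `parse_coding` is hypothetical, not part of any pipeline named in this page; the sample array is abbreviated to two entries from the response above.

```python
import json

# Abbreviated sample of the raw LLM response shown above.
raw = '''[
  {"id": "ytc_Ugy2ybPbDmq6VxNP6Hh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgydVQkLD2tXkfQt9A14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding(raw_response: str) -> dict:
    """Index coded comments by id, checking each entry has every dimension."""
    coded = {}
    for entry in json.loads(raw_response):
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{entry.get('id')}: missing dimensions {missing}")
        coded[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return coded

codes = parse_coding(raw)
print(codes["ytc_Ugy2ybPbDmq6VxNP6Hh4AaABAg"])
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#  'policy': 'regulate', 'emotion': 'fear'}
```

Validating for missing dimensions before indexing catches the common failure mode where the model drops a key, rather than surfacing it later as a KeyError in downstream aggregation.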