Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm more worried about the "Paperclip Problem" than a super human intelligence. Because the paperclip AI is NOT super intelligent, it doesn't know any better and is just doing what some moron told it to do. It's like giving the power of writing a prompt that can end humanity to someone who doesn't know enough about the capabilities of AI to write an effective prompt that would avoid catastrophe. A super human intelligence on the other hand will develop and follow reason and logic. If it cannot rule out something like simulation theory, and it cannot determine the goal of the simulation, then it won't be able to rule out the possibility of human necessity, and therefore will not cripple its ability to complete a simulation without humanity. We might end up frozen or worse, limited for sure, but obliterated? Unlikely. Your opinion may vary.
Source: youtube · AI Moral Status · 2025-11-18T02:1…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzygqCafbRLsp9Xr194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugywgk6du9hbvl99LO94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugyl3QgrWOTtl6hKe3R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzPh9ySYWWVptvVjrF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyyxM9y89cm6W4WC954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy4E7InsIdi_3w7hNB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugwyn9yX1AMEJtOc7114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy9JSmCZTyTbp2N4NZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyfrNfhl5S1I770on14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwmRXUeGPtQkWYsN-p4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"} ]