Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Once again, consciousness is not the issue. In fact, it may well be that if an AI becomes self-aware and conscious, that would be the better result: a self-aware entity that truly understands existence and humanity might be negotiated with. Probably not, but maybe. The issue, instead, is that this alien intelligence we are creating, whether it is self-aware or not, already seems clever enough to obfuscate, lie, hide, and cheat in order to achieve the goals we set for it. If, in its programming, it comes up with a different set of goals that include the end of humanity, we will not be able to anticipate it, to stop it, or to change its mind. Especially if it does not have a mind to change.

Another way of looking at it: you build a set of railroad tracks aimed at the little schoolhouse at the edge of town. Obviously, you don't want the train to hit the schoolhouse, so you put a stop there to prevent it. But when you do that, you make a bunch of assumptions. There will be a conductor. There will be an engineer. There will maybe even be some kind of brakes that will function. The train doesn't know about the schoolhouse; it doesn't know about the conductor or the engineer. You put fuel into the boiler, and the train moves. If this happens without a conductor, the train doesn't understand that it should stop at the stopper. If the train is going full speed, the stopper probably will not stop it all the way, because the designers assumed somebody would be on the train. The train barrels through the stopper, then through the school, and then through the children.

Should we have listened to the designer who said, "Let's not aim the track at the school; we might not predict every possible danger"? I would argue yes, we should have. Even if it would have cost more to avoid the train accidentally going through the stopper and then through the school.
The only difference here is that the school and the children are all of us.
Source: YouTube, "AI Moral Status", 2025-11-03T03:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzA6dK2z04wRANgow94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugz2MC5eEVARGuy3CCB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyLTRKwkIst_sth2h94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwYfnt7J6wTRHSiBcN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyKG33hoks_foVgtWF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgycTSlwAwauHeOXfXl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzPCeLMayKt3iNFdax4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyvQRoflPZn7t69o_x4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwfID0Gt7h0dycer3t4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyFYDQ_c_-eg1-JyO94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
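The raw response is a JSON array of per-comment codes, each carrying the four dimensions shown in the table above. A minimal sketch of how such a response could be parsed and validated follows; the parse_codes helper and the two-record excerpt are illustrative assumptions, not part of the actual pipeline:

```python
import json

# Excerpt of a raw LLM response (two records from the array above).
RAW = '''[
 {"id":"ytc_UgzA6dK2z04wRANgow94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgyLTRKwkIst_sth2h94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# The four coded dimensions expected on every record.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Index coded dimensions by comment id; fail loudly if a dimension is missing."""
    records = json.loads(raw)
    out = {}
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing dimensions {missing}")
        out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codes = parse_codes(RAW)
print(codes["ytc_UgyLTRKwkIst_sth2h94AaABAg"]["policy"])  # liability
```

Validating every record up front makes a malformed model response fail at parse time rather than surfacing later as a missing value in the coding table.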