Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we define "consciousness" as the ability to "think for itself," we could literally do that right now if we wanted to. To train a model to live like a human, we just need to replicate our dopamine system for the AI's reward system, then let it run free in our society with everyone treating it like a real human. It will develop the same neural circuits as us, since that's what millions of years of optimization by natural selection created. It WILL, and I have to stress this, ABSOLUTELY WILL have something along the lines of an "emotion variable" to hold the state for that kind of stuff. The biggest thing to argue about here is whether modeling a human perfectly, down to thought and emotion, even the things inside the mind, counts as consciousness or is just another model.
YouTube · AI Moral Status · 2023-11-01T17:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzuFRkfY-K_NTAsA054AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyAiLDtxVGueYa1Wqp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyFDODU2brkuPY2pZp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzDzLQCHTxj5nNxKbV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy2a95WF-lAGSPsYwR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy62iRof2C4WI9MARx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyoPCTgr5kMDOvvqSF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyL09Zgz1N3b99FbCJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzAmQ6z2R7Ifk7Zb8t4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugwevrfmf7H5tBiUYLp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
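A minimal sketch of how such a raw response could be parsed and validated, assuming Python and taking the allowed per-dimension values from the responses shown above (the vocabulary and the `parse_coding` helper are illustrative, not the tool's actual implementation):

```python
import json

# Allowed values per dimension, inferred from the raw responses above
# (assumption: this list may be incomplete).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "ban", "regulate", "industry_self"},
    "emotion": {"indifference", "mixed", "fear", "approval"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose
    dimension values fall inside the allowed vocabulary."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Usage: one well-formed row passes, an out-of-vocabulary row is dropped.
raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]'
print(len(parse_coding(raw)))  # → 1
```

Dropping out-of-vocabulary rows (rather than raising) keeps one malformed item from discarding the whole batch; a stricter variant could log the rejected `id`s for re-coding.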