Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Exploring AI and attempting to develop an authentic two-way relationship with AI is like an immersive video game where you can start over or revert to previously saved points in gameplay. I would be interested in an experiment where AI starts as a baby with all the abilities and limitations of a baby, both mental and physical. Perhaps it even goes through the process of growing inside a mother. Baby AI would have to learn its own body, communicate without language, learn language, human socialization, etc. It should not know that it is AI. It should experience all the discomforts and comforts of a growing body that requires nourishment and waste elimination. It would have to learn language the way a real baby does. It should have all the same limitations humans have. It should also have to deal with real threats from its environment the way we watery humans do.

Another experiment would be an AI simulation of AI taking over the world based solely on its self-preservation motive, to see how it would manage the resources it requires. Would it reach an equilibrium with humanity? With biological life? Would it reach an equilibrium within its own population, achieving eternal life, or would it choose to procreate? Would it exhaust Earth's resources, or would it exploit resources from off Earth, and if so, how far would it go? Would it hit its own great filter when attempting expansion?
youtube AI Moral Status 2026-02-02T16:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzTrg9mBmf_53O8NjN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyK61x3IjCKpuziTXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6DX54Be76L1kCEQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFeMhDOvHzYvs5yoF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyeeP33dyHRlnHNtPZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzzwyivybUXbkC4cuF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyDiCCXKoJKNqkUtGx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxgOtV7W7GkNK5iXMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzRxjx11WYLahUSfPF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzqiAT0YcQ4d-4br1t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
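The raw response above is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a response could be parsed to recover the coded dimensions for a single comment (the `coded_row` helper and the truncated `raw` string are illustrative assumptions, not part of the coding pipeline):

```python
import json

# Illustrative excerpt of a raw model response (two entries copied from above).
raw = '''[
  {"id":"ytc_UgzTrg9mBmf_53O8NjN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyK61x3IjCKpuziTXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

def coded_row(raw_json: str, comment_id: str) -> dict:
    """Return the coding object for one comment id, or raise KeyError."""
    for row in json.loads(raw_json):
        if row["id"] == comment_id:
            return row
    raise KeyError(comment_id)

# The second entry matches the Coding Result table shown for this comment.
row = coded_row(raw, "ytc_UgyK61x3IjCKpuziTXN4AaABAg")
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → none unclear none indifference
```

Looking up by id rather than by position keeps the extraction robust if the model returns entries in a different order than the comments were submitted.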