Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Around 45 minute Mark he talks about a science fiction novel where Ai and humans live in a Utopia. But the AI finds us boring. This is inherently what's wrong with not just the authors point of view but AI research and development as a whole. The limited scope to anthropomorphize a non living thing. Let me explain. The opposite of being bored is being intrigued. Ai will not match the ability to replicate the cascading affect of hormones and their will to live. Procreate and nurture. Based off these hormones we move through life. The accumulation of knowledge and the automation of medial task or work that offers a high degree of precision like surgery still does not make this a sentient creation. It can't be bored it cant feel excited it can self work on self preservation as it was programmed to do so. Like a nuclear fail safe to work towards continuing its task. It is not and it won't be alive to feel or hear. It can only replicate and mimic
youtube AI Governance 2025-12-05T02:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugy6JV8I7_LTM8Iu1YN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgziKsYoYPlh2sXQjcd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwv64ZRQmtgeq0y0v54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzkbkpXalG6cxy1ilF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy8IKg8Gw90VZw6jZ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxmLlalKgbtRnGFRY14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwBrU885sb0ZwQFVWh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyX-XktitTYcYS-U7h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwdFWt6Bfmig2qG2Jt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwIRhB3L5agL9Dm_794AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
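The raw response is a JSON array of code objects, one per comment id. As a minimal sketch (Python; the `index_codes` helper is hypothetical, not part of the actual coding pipeline), the batch output can be parsed and indexed by comment id to recover the per-comment dimensions:

```python
import json

# Raw batch output as returned by the coding model: a JSON array of
# code objects, one per comment id (two entries from the response
# above shown here for brevity).
raw = """
[
  {"id": "ytc_Ugy6JV8I7_LTM8Iu1YN4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyX-XktitTYcYS-U7h4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"}
]
"""

def index_codes(raw_json: str) -> dict:
    """Map each comment id to its coded dimensions (hypothetical helper)."""
    return {entry["id"]: entry for entry in json.loads(raw_json)}

codes = index_codes(raw)
# Look up the codes for a single comment by its id.
print(codes["ytc_UgyX-XktitTYcYS-U7h4AaABAg"]["reasoning"])  # deontological
```

Looking up `ytc_UgyX-XktitTYcYS-U7h4AaABAg` reproduces the Coding Result shown above (responsibility: developer, reasoning: deontological, policy: none, emotion: mixed), which identifies it as the id assigned to this comment in the batch.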