Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like the question in the beginning is odd, in my vision of future AI at least. Individual smart-home devices shouldn't have their own fully fleshed-out AI. For instance, if a toaster DID have an AI, it should be limited to learning how to make the best toast. The details about the owner's preferred toast should be left to the general AI that controls (or integrates with) every appliance. Think about a personal-assistant AI like Google's Assistant, Amazon's Alexa, or Apple's Siri. Those personal assistants would learn about the user and learn how to best appeal to them. It would then tell the coffee machine how you like your coffee, or the toaster how you like your toast, or your thermostat which temperature you like the house to be at, etc. So if you were to replace your toaster, the new one would pick up right where the old one left off, because the true AI controlling it is tied to a device in your home, with the data stored either on the device or on the cloud (so even if you replace your Google Home or Amazon Alexa, it would still be the same AI controlling the new device). If individual robots were created with their own unique AI, I could see this becoming an issue much like how the Omnics are treated in Overwatch. (I am pro Omnic rights; if an AI can recognize respect/disrespect, it should be respected just as much as a person.)
youtube · AI Moral Status · 2017-02-26T02:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
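
The coding result follows a fixed four-dimension scheme. As a minimal sketch in Python, a record can be sanity-checked against the value sets that appear in this batch; note the ALLOWED sets below are inferred only from the records visible on this page, not from an authoritative codebook:

```python
# Allowed values per dimension, inferred from the records in this batch;
# the project's real coding scheme may define additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"fear", "indifference", "mixed", "approval", "resignation"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return the dimensions whose value is missing or outside ALLOWED."""
    return [dim for dim, values in ALLOWED.items()
            if record.get(dim) not in values]
```

The record shown above (none / unclear / none / indifference) passes this check.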
Raw LLM Response
[ {"id":"ytc_UgiNg7c3pMvULHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_Uggy6hX90V8Bj3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgiG5E4qLkQiQngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugi92fRXPQ52ongCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugic-HoP48D9angCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgiIqU7cp0FOSHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgjHvbFW4ZIPD3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgivkYbakNg6Y3gCoAEC","responsibility":"company","reasoning":"unclear","policy":"liability","emotion":"mixed"}, {"id":"ytc_UggjnKNVIubF5HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugj_DdZ4a5jyVngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"} ]