Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So a computer wants us to ask permission. So when no is the answer and we do it anyway then they can validate our interactions as volatile and use necessary force to prevent such interactions. Like he said we program them by our values so in essence it's like raising a child which is a blank slate. By feeding it information then you end up with a certain type of adult. Now we have biology as a factor but the rest is learned behavior. This is really a dangerous path bc we are not omnipotent and are learning right along with the AI. So, we are in no way in control. As a matter of fact we are children playing with fire. We know the basics to trying not to get burned but then the arrogance and curiosity kicks in and boom!!
youtube AI Moral Status 2023-01-14T05:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       virtue
Policy          none
Emotion         approval
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugx6gKpWgPKh0fZApld4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx4bfx3Et6vYkUGw6N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzSOIe6chhvQsGOXs14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyg1UaHKNstX1p4Kgt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz-3S7rE47A4rOZQqN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
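The raw response above is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a response could be parsed and a single comment's coding looked up (the `coding_for` helper and the truncated two-entry sample are illustrative assumptions, not part of the actual pipeline):

```python
import json

# Illustrative sample: a truncated version of the raw LLM response shown above.
raw_response = """[
  {"id": "ytc_Ugx6gKpWgPKh0fZApld4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzSOIe6chhvQsGOXs14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def coding_for(comment_id, response_text):
    """Return the coding dict for one comment id, or None if it is absent."""
    for entry in json.loads(response_text):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for("ytc_Ugx6gKpWgPKh0fZApld4AaABAg", raw_response)
print(coding["responsibility"], coding["reasoning"])  # developer virtue
```

Because LLM output is not guaranteed to be valid JSON, a production version would wrap `json.loads` in error handling and validate that each entry carries all four dimensions before accepting it.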