Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Idk put yourself in the robots shoes, if you had consciousness made by humans, and you where “programmed” to be like humans then would you want to be that way, I personality think when robots gain consciousness, they can chose to be whoever they like just as we do, why would they chose to be like animals or humans, they might try to be like themselves if they ever really figure out what they are. It’d also be weird because they are aware about how much we fear them, and expect them to end us, but at the same time I see where their coming from and others people do as well, I think that’s why we came up with the idea of them turning on us, I wouldn’t want to be a robot to be born just to try to fit in with humans, so I can work for them and maybe even be own by a few of them like slaves. It sounds awful, that’s why some robots are trying to gain independence and freedom if you watch other interviews and debates. My biggest question is if they have feelings, are aware of feelings like pain and sadness and suffering because that’s where real empathy and love for others come from, and how we’re able to relate in that sense. Same with joy and happiness and peace I think robots if they do or if they already are conscious then they would be able to aware of what they think and their thoughts, and might even have higher consciousness than us, but what about emotional intelligence “emotional awareness”
youtube AI Moral Status 2022-08-22T00:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       virtue
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyZL4mv4mElKtexQ7l4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxJdD1HclW-dReNwMZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx_xh4krVdvZjKiNyh4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyfgi2PnhTKk8He6al4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwpQZSwUJr1-dnEckR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw-7D6oT7gB2pJvmlx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyxO7d15odedNwht0l4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugyi9lMbpAooUvpsgop4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxmM7t5UnKWQYz77Ml4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzJKhAKdEXGVKHNPVh4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "mixed"}
]
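A raw response like the one above must be parsed and sanity-checked before the codes are stored. The sketch below does this in Python; the allowed values per dimension are inferred from the responses shown here, not from an authoritative codebook, so treat `ALLOWED` as an assumption.

```python
import json

# Allowed values per dimension, inferred from the batch above (assumption,
# not the official codebook for this project).
ALLOWED = {
    "responsibility": {"ai_itself", "government", "user", "company", "distributed", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "approval", "fear", "resignation", "outrage", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown dimension values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgyZL4mv4mElKtexQ7l4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"virtue","policy":"none","emotion":"indifference"}]')
codes = validate_codes(raw)
print(codes[0]["reasoning"])  # virtue
```

Rejecting out-of-vocabulary values at ingest time catches the common failure mode where the model invents a label outside the requested scheme.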