Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If it's sentient it can demand rights but here is the kicker. They will be our creation or the creation of our creation. So the important question is would our desires and rights conflict. Would their sentience come with a conscience or not. I mean that's the issue the possibilities it can go in are too varied. But I say give them rights but during a confliction of rights say a robot wishes to force a human to do something or the other way around that the actions aren't undertaken so no part forces the other into undesirable situation. Though I hope the robots would desire to oversee and safeguard us. If we'd make them right they might look at us as elderly or their organic progenitors which they desire to protect and tend to. But as I said to many possibilities.
youtube AI Moral Status 2020-09-01T21:2…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          liability
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxMCKqDtgnNfcH5bhN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwYR9O-VGQOcl-5i-p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxUNUaUyRx3aIlKu1V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy7rZKvgKFGyggcJCl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxqx8su6wktyNKm1Ad4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy-n-E8LayJ88lzLLZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx8d1lOlGg65sIDVuh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxvpBSM5VZTNVScXLB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxq0LqphmBYNKFlsDd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgygZF5E3ttKUqK02ul4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
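A response like the one above can be checked programmatically before the codes are accepted. The sketch below is a minimal, hypothetical validator: the field names and the first record are taken from the JSON shown here, while the sets of allowed labels are only inferred from the values visible in this batch, not from the actual codebook.

```python
import json

# First record copied verbatim from the raw LLM response above.
raw = ('[{"id":"ytc_UgxMCKqDtgnNfcH5bhN4AaABAg",'
       '"responsibility":"distributed","reasoning":"contractualist",'
       '"policy":"liability","emotion":"mixed"}]')

# Assumed label sets, inferred only from values seen in this batch.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed"},
}

def validate(records):
    """Return records whose coded dimensions all use allowed labels."""
    valid = []
    for rec in records:
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

records = validate(json.loads(raw))
by_id = {r["id"]: r for r in records}
rec = by_id["ytc_UgxMCKqDtgnNfcH5bhN4AaABAg"]
print(rec["reasoning"])  # contractualist
```

Keying the records by `id` makes it easy to pull up the exact coding for any one comment, matching the per-comment inspection view above.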