Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we miss the point at which some robots become sentient and still treat them as slaves, they will start to develop a grudge against humans which leads to human extinction. I see 2 solutions: 1. Don't let the robots become sentient. (Not gonna happen, they will at some point so this is the short-term solution.) 2. Treat robots with dignity from the start. That way we might look a little stupid when being nice to an unsentient robot but once the first artificial superintelligence emerges it will not have a reason to enslave us all (unless somebody designed it for that task) Personally, I believe that nothing (except maybe architecture, as in binary vs. hexavigesimal) differentiates humans from artificial intelligence, human intelligence is already "artificial" since it emerges from inanimate objects just like "artificial intelligence" would. To me, people who claim that humans have that special something (soul or whatever) that makes them alive/conscious are most likely human elitist. What makes you think that humans are better than everything else except that you are one you selfish prick? Nothing against selfish pricks, but don't act all morally superior and then start crying "buuhuu but as a child, I was told god said humans are the best". That's the true danger of religion... Wow this turned into kind of a rant... Why did you click "show more" again?
Source: youtube · Video: AI Moral Status · Posted: 2018-04-07T13:4… · ♥ 32
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
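Each row above is one dimension of a single coded record, which the pipeline stores per comment together with the comment id. A minimal sketch of such a record in Python (the class and field names are assumptions for illustration, not the pipeline's actual schema):

from dataclasses import dataclass

@dataclass
class CodedComment:
    # Hypothetical container mirroring the table above; the real
    # pipeline's storage schema is not shown in this document.
    id: str
    responsibility: str  # e.g. "distributed"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "none"
    emotion: str         # e.g. "fear"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-27T06:26:44.938723"
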
Raw LLM Response
[ {"id":"ytc_UgyramFbmtiFDcJAxf14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy88NA-Om8hqQgzB3R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyRz2EIcMDddYq41fV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzunlRZFgU-88ttIxV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugyp5r4jQBKD_rgDaY54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyaSiSdW5ZFjpHi1RJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyDAtM8adaVsvjl5CZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxEaKjTvNIXxaGNCs94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxpHMsTwHSW15Xc5ad4AaABAg","responsibility":"none","reasoning":"virtue","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwpCqPJKUMQAixZV7R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"} ]