Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like world governments need to force all of these companies on the forefront to share technology and research. They should also be forced to have the same amount of compute. That way there could be a taskforce of watchers with access to state of the art and each group could check the work of others. I think there's a real safety risk from human selfishness and desire for profit from relatively small groups of people. I'm not saying they should all work on one model, but rather all advancements must be shared so that multiple groups can integrate state of the art research if they want. I think it's important that various AI are at the same level of capability. So if one model somehow goes rogue, it's possible that the other models will be capable of stopping it. At some point humans won't be capable so having different AI monitor and manage other AI is one route of keeping some form of safety control. All that said, my general outlook is hopeful, but I don't know details on the underlying technology. It seems like if AI continues to be trained on the collective public works of humanity then it should have our good and bad side. I'd be more concerned with artificially created training data that might strip out our sensibilities (whatever X is doing with their AI clearly had consequence). Who's to say that humanity can survive long term without this technology? I'd think it's very likely that we need extremely advanced abilities to understand and apply humanity's knowledge and understanding we've created at this point. I don't think we have the I/O and other capabilities as individual humans to really utilize everything in ways that could keep us thriving in this universe. One gamma ray burst or whatever and we might be toast. Even current AI is showing us we miss things in our own research data. I think there's something to be said about humanity's natural desire for advancement of technology.
youtube AI Moral Status 2025-10-31T03:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwUXNN0BH9UGFe3AIR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzRmfkOp6bO0nb9UXx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyi3RCOeht4txJNWBB4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzM2FPyCXlq3ddCGYd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxD4DvwO2UxlJlS6114AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz6Pt_A9K6iBockeqF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzOjPJrQfssCdRpZDd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzQk-TwitKTFePsIm54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugypezwk4B0M5UuE24V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyty4n1d7bq8r3t-k14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
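The raw response is a JSON array covering a whole batch of comments, while the Coding Result above shows a single record. A minimal sketch of how such a batch could be parsed and one comment's codes looked up by id; the two records here are copied from the response above, and the assumption (judging by the matching dimension values) is that `ytc_UgzQk-TwitKTFePsIm54AaABAg` is the record for this comment:

```python
import json

# Abbreviated copy of the raw batch response (two of the ten records).
raw = (
    '[{"id":"ytc_UgzQk-TwitKTFePsIm54AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_Ugypezwk4B0M5UuE24V4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)

records = json.loads(raw)
# Index the batch by comment id so a single comment's codes can be looked up.
by_id = {rec["id"]: rec for rec in records}

codes = by_id["ytc_UgzQk-TwitKTFePsIm54AaABAg"]
print(codes["responsibility"], codes["policy"])  # company regulate
```

Indexing by id rather than by position makes the lookup robust if the model returns the batch in a different order than the comments were submitted.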