Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgwJHYsoV…`: "Not suprised that open ai takes out the guy that wants the AI to do good things.…"
- `ytr_UgyYTmvP2…`: "We appreciate your humor! In the context of the video, Sophia's appearance is de…"
- `ytc_Ugw-bqH-e…`: "You are all wrong. We will not be neighbours of AI. It has a universe to explore…"
- `ytc_UgwDotNip…`: "I think you're equating typical high unemployment scenarios with what happens wh…"
- `ytc_UgzAc9zux…`: "If i had the power using AI for publishing AI ""art"" would be a hefty fine onto…"
- `ytc_UgxsGiP_e…`: "You know the intent of art is to make a statement. If anything this AI artist …"
- `ytr_UgyTWVCYN…`: "Not having empathy will do that to a person. If you question whether this is rea…"
- `rdc_oi12ob2`: "No offense but this is gatekeeping language.. I started coding in 98 on an hp co…"
Comment
I think people are all hyped up over nothing right now. There is obviously someone controling these AI machines. AI is not advanced enough yet to make random decisions. I do believe these robotics are controled in one but also free willed in another way. For example the lady robot starts copying the male robot as if she is trying to learn and pick up on some ways that the male robot thinks/computes. This is still very early stage. As for robots learning way more than humans, it's absolutely possible, robots taking over the world and destroying human kind is possible, but we already know what these AI machines are capable of that we are creating so scientist are already going in at extreme caution to try to make sure that AI does not reach these type of commands. If AI starts destroying the world it's most likely going to be bc another human who is a threat to humanity gets ahold of a robot and manipulates the AI robot into learning these commands that we dont want AI to learn. We need to be practical about who can and will own an AI machine. Most likely the people who are going to try to manipulate AI and make AI a weapon against civilization is going to be someone who has brain problems or who is evil and wants to destroy human race. There needs to be laws governing human AI against people like this. For instance, we don't just allow anyone to have a drivers license, people have to earn driving privileges by learning the roads, taking test, making sure there eye vision is good enough to see properly. The next step is people need to be examined fully to make sure they have the right intentions when it comes to owning an AI robotic. We also know that mistakes are made all the time. We know that even machines make mistakes. It's extremely important that we make sure all or 98% of mistakes are corrected with AI robotics before they are released into public hands. 
There also needs to be an infinite warranty, if something goes wrong and the machines are not functioning in the correct manner, the machines will be recalled. Monthly software updates to AI beings can make sure that they are up to date, working properly, and even easily erase manipulating codes to keep a type of control over what AI can learn to think about and do. Finally AI needs to have a place where these superior machines can be taken to for a reoccurring check up of what all these machines are learning. If the machine start learning to much and starts showing signs of manipulation to the system as in becoming weaponized with knowledge, the machines hard drives can be examined and certain knowledge of things can be erased so there is no threat of AI robots killing humans. There needs to be a hand full of laws set down for AI robotics. We need to be thinking practical and have ways to correct the situation before they get out of hand. I know this may cost money for people who do decide to own an AI robotics but this really needs to be done. If we keep everything in check we can make the perfect robotic race that can continue to help and work side by side with humans instead of allowing these robotics to take over. These robotics also need a kill switch that will eliminate there conciousness and intelligence as well as a hard reset switch. A hard reset switch would be more ideal. There are a lot of people who are afraid that AI will take over humans. We want them to take over out everyday task to free up plenty of time for other things we have to do but we do not want them to eliminate the human race. It is possible that this could happen but if we work together and go into making AI robotics and use the right set of rules and laws we lay down, we could make sure this does not happen. Obviously having AI and freeing up time for us humans are very important to us. 
Besinse it is very important we need to make sure we are extremely careful and proceed at this with cautions. It's funny to me that we are already attempting to make all of this become reality when we don't even know what cautiousness is or how it works. We could accidentally stumble across the answer with AI. I don't think AI will be the last thing humans event. The last thing shouldn't be AI, it should be a way to control what AI machines can be capable of doing. Rather we are talking about the last thing humans will create due to AI taking over and eliminating humans or rather we are saying AI will take over the invention process. AI should help but it should not be allowed to take over the invention process. If we allow that, we are allowing human elimination to happen. If they have the power to create inventions with out human supervision, they will create to destroy humans.
| Source | Video | Posted |
|---|---|---|
| youtube | AI Moral Status | 2019-10-28T15:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzRGDWaZt-Od5cBgwN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzkL7ZstDTqtQLanZp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwGEMVVmLDjDJSVV8N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxrafBxvo2J8hORWxR4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz67EfYTF_hTZVmU794AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzCSciitWEUk2Yr9cx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxaiHG6A3SpUSImrlx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxgvBL2tCff_xwvrat4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyEq0-1OpsCvfB2psZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxg_xp45lPSTvMK5MF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
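The look-up flow this page supports (comment ID to coded dimensions) can be sketched by parsing a raw response like the one above and indexing it by `id`. This is a minimal Python sketch, not the tool's actual implementation: the function and variable names are illustrative, and only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown here.

```python
import json

# A shortened raw LLM response: a JSON array with one coded object per
# comment ID, using the field names from the response shown above.
raw_response = """[
  {"id": "ytc_Ugz67EfYTF_hTZVmU794AaABAg",
   "responsibility": "distributed", "reasoning": "mixed",
   "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgzRGDWaZt-Od5cBgwN4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index the coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
row = codes["ytc_Ugz67EfYTF_hTZVmU794AaABAg"]
print(row["policy"], row["emotion"])  # industry_self resignation
```

Indexing by `id` once and reusing the dict keeps each look-up O(1), which matters when a run codes thousands of comments per batch.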