Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Idk what YouTube is smoking and idk if this comment is going to get repeated a b…" (ytr_UgwMM1Cvw…)
- "dont be bothered Bernie in 5 years we all are vanished since AI will kill us lik…" (ytc_Ugxgyr4-m…)
- "The problem is exactly what he is describing at the beginning of the video. Whil…" (ytc_Ugwtfob52…)
- "my items that aI's are taking over is making terrible mistakes which is causing …" (ytc_UgyB_Utga…)
- "I think that I robot it's the perfect reason of why we shouldn't let robots lear…" (ytc_UgyXUFRkY…)
- "AI can react much better than a human can it has saved countless lives and it wi…" (ytr_UgxmOTffp…)
- "The jobs for creatives are/will be harmed, with or without licensing. Licensing …" (ytc_UgxH29lo0…)
- "Also how dangerous it is to be spending the vast majority of our economic invest…" (rdc_nue6wpq)
Comment
""Now imagine if you could make that AI a intolerant religious fundamentalist? Or to advance the "company's" profits at all cost.""
The people smart enough to make advanced AI are smart enough not to do this (unless the AI is created by throwing compute at existing techniques).
""The other nightmare of course is that ethics isn't supported and you get a totally ruthless AI that is in effect a sociopath that sees Humans as a threat.""
You get exactly what you program. The difficult part is knowing what code will produce the design you want. This is a tricky, but probably solvable, problem. If it has been solved, we get a world where the AI's morality is controlled by its designers; otherwise we get an AI that neither loves you nor hates you, but you are made of atoms it could use for something else.
""But I like to think that a fully self aware conscious AI that likes to think and problem solve will love Humanity, because we will be so intellectually stimulating - our problems and issues are so complicated that we will always provide a mental challenge and with us around they will never get bored.""
Not all minds feel boredom. The AI might find very hard maths more interesting, and kill off humans as it turns all available mass into computers. Our problems probably aren't that complicated compared to a superintelligence, and coming up with new and creative ways to make humans miserable would be just as interesting an intellectual challenge.
""Or an AI could be created with whatever biases and beliefs that the designer wanted and they would "keep"
Now imagine if you could make that AI a intolerant religious fundamentalist?""
If a sufficiently competent AI programmer wanted to, they could build an AI that was a religious fundamentalist, or a flat earther. If your AI is basically an uploaded human mind, it would have the same beliefs as that human. It would also not be that dangerous: a flat earther running at many times human speed would still not be able to come up with a plan to take over the earth that would actually work. An AI programmer could build a system that reasons about the world based on evidence and then hardcode the belief that the world is flat, but even a programmer who believes the world is flat has no reason to hardcode it, if they expect the evidence to convince the AI anyway. Unlike factual beliefs, questions of ethics have to come down to the programmer somehow, but it can be through something like "follow the ethical principles that I would endorse if I thought longer about ethics." The more abstract and meta-level you can make the instructions, the more the AI can fix any mistakes you made in going from the general principle to the object-level decision.
Platform: youtube
Title: AI Moral Status
Posted: 2020-07-08T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_Ugzn5INNq5XxmGhCqAt4AaABAg.9AqcAFEtzZK9AvIskRS206","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzRKHbBfFUf-rSVN594AaABAg.9AqbkueXgTi9AqgSCnqbeg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugz5T5UYbPEHH-9D4nN4AaABAg.9Aqb381m3LA9Ar5-GmEl1Z","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_UgwT2ROP1DdiiDKGB6l4AaABAg.9AqZUUWv0ue9AqpOo2jPI2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugz7jpaUhfCvthm6InV4AaABAg.9AqY8Qc4i4J9AqnynRtgDh","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_Ugz7jpaUhfCvthm6InV4AaABAg.9AqY8Qc4i4J9Aqps-197w5","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugz7jpaUhfCvthm6InV4AaABAg.9AqY8Qc4i4J9Aqsb6nEaVJ","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytr_UgxeEqy56N4G3NSRKx54AaABAg.9AqXW5-vjx09AqY26Cf6QA","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxeEqy56N4G3NSRKx54AaABAg.9AqXW5-vjx09AqZS00506D","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxeEqy56N4G3NSRKx54AaABAg.9AqXW5-vjx09Aq_i5kwP5j","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
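The raw response is a JSON array mapping each comment ID to the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and validated; the allowed category values below are inferred from the examples on this page, not from an authoritative codebook, and the function name is illustrative:

```python
import json

# Allowed values per dimension, inferred from the samples shown on this
# page; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference",
                "resignation", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a batch coding response into {comment_id: {dimension: value}}.

    Raises ValueError on a missing dimension or a value outside ALLOWED,
    so a malformed LLM response fails loudly instead of polluting the data.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        coded[cid] = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {value!r}")
            coded[cid][dim] = value
    return coded

# Usage with a shortened two-record response (IDs abbreviated for the example)
raw = '''[
 {"id":"ytr_x1","responsibility":"company","reasoning":"deontological",
  "policy":"liability","emotion":"fear"},
 {"id":"ytr_x2","responsibility":"government","reasoning":"consequentialist",
  "policy":"regulate","emotion":"fear"}
]'''

coded = parse_coding_response(raw)
print(coded["ytr_x1"]["policy"])  # liability
```

Validating against a fixed value set is the design choice that matters here: LLM coders occasionally emit a category outside the schema, and rejecting the whole record keeps the coded table consistent with the codebook.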