Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @pezvonpez If i summarize his ideas : "AI is bad as it goes against copyright's … (ytr_UgxfE7eui…)
- even if ai art got to the point where it could make something as good as real ar… (ytc_UgxJXLvTC…)
- Do we have a principled reason for assuming these LLM AIs are not conscious? Imh… (ytc_UgxBQMylb…)
- Everyday citizen issues warning to AI CEO"s that spending cash you don't have on… (ytc_UgwMdE0MW…)
- Literally yesterday I got r@ped by a 32 year old in chat AI and my parents found… (ytc_UgwHGLGAA…)
- My dream job is to become an artist but after AI started generating art I was no… (ytc_UgypeLqRa…)
- 7:05 I'm pretty sure this part was edited/scripted but it's still funny to imagi… (ytc_UgzJ1yk0A…)
- There are genuine AI tools that can assist in making art but they aren't creatin… (ytc_UgxswWD8S…)
Comment
@smokymcbongwater1088 A Terminator makes for good blockbusters, but real-world AI is much worse. A Terminator is predictable because it thinks like humans do, only better. Real AI is alien and unpredictable. It doesn't even need to come remotely close to human intelligence to pose a danger to humanity.
AIs will have goals, and goals require resources, so by default we already know that an AI will seek out resources to achieve its goals. The AI will be better at this than us, and it is unlikely to share our values, precisely because values cannot be quantified into code in a way that leads an AI to draw the same conclusions we do.
An AI can also directly improve itself to become better at achieving its goals. This improvement comes in two forms: rewriting itself to be more efficient, and acquiring computing power.
The second form of improvement means the AI will seek out resources to improve its ability to achieve its goals, and this improvement is theoretically limitless.
So without knowing any of the goals an AI could have, we know that it will have goals, that it will seek resources to achieve them, and that one of its instrumental goals will be improving its computing power indefinitely. Therefore any kind of AI will seek to consume as many resources in the universe as it can, as fast as possible, to achieve whatever goals it has.
This is inherently detrimental to humans, since we have to share this universe with AIs, and our using resources an AI wants will lead to it undermining us in some fashion, either overtly or, more likely, covertly.
youtube · AI Moral Status · 2020-07-08T09:2… · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
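A coding record like the one in the table above can be sanity-checked before it is stored. The allowed value sets below are inferred only from the codes visible in this dump, not from an authoritative codebook, and the helper name is illustrative:

```python
# Allowed values per dimension, inferred from the codes visible in this dump;
# the real codebook may contain more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "government", "company", "developer",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference",
                "resignation"},
}

def validate_coding(record: dict) -> list:
    """Return the dimension names whose value is missing or not allowed."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

coding = {"responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate_coding(coding))  # []
```

An empty list means every dimension carries a recognized code; any unexpected or missing value is reported by name so the record can be flagged for re-coding.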
Raw LLM Response
```json
[
{"id":"ytr_Ugzn5INNq5XxmGhCqAt4AaABAg.9AqcAFEtzZK9AvIskRS206","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzRKHbBfFUf-rSVN594AaABAg.9AqbkueXgTi9AqgSCnqbeg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugz5T5UYbPEHH-9D4nN4AaABAg.9Aqb381m3LA9Ar5-GmEl1Z","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_UgwT2ROP1DdiiDKGB6l4AaABAg.9AqZUUWv0ue9AqpOo2jPI2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugz7jpaUhfCvthm6InV4AaABAg.9AqY8Qc4i4J9AqnynRtgDh","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_Ugz7jpaUhfCvthm6InV4AaABAg.9AqY8Qc4i4J9Aqps-197w5","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugz7jpaUhfCvthm6InV4AaABAg.9AqY8Qc4i4J9Aqsb6nEaVJ","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytr_UgxeEqy56N4G3NSRKx54AaABAg.9AqXW5-vjx09AqY26Cf6QA","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxeEqy56N4G3NSRKx54AaABAg.9AqXW5-vjx09AqZS00506D","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxeEqy56N4G3NSRKx54AaABAg.9AqXW5-vjx09Aq_i5kwP5j","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
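A raw response in this shape can be parsed and indexed by comment ID to support the look-up described at the top of the page. This is a minimal sketch using a miniature response with made-up IDs (`ytr_AAA`, `ytr_BBB`); the field names match the JSON above, and the helper name is illustrative:

```python
import json

# A two-record example in the same shape as the raw LLM response above.
# The IDs here are made up for illustration.
raw_response = '''[
  {"id": "ytr_AAA", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_BBB", "responsibility": "government",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw LLM response and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_comment_id(raw_response)
print(codings["ytr_AAA"]["emotion"])  # fear
```

Building the index once makes every subsequent look-up by comment ID a constant-time dictionary access rather than a scan of the JSON array.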