Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A Terminator makes good blockbusters, but real-world AI is much worse. A Terminator is predictable because it thinks like humans do, only better. Real AI is alien and unpredictable. It doesn't even need to achieve intelligence remotely equal to humans' to pose a danger to humanity. AIs will have goals, and goals require resources, so by default we already know that AI will have goals and will seek to acquire resources to achieve them. The AI will be better at this than us, and it is unlikely to share our values, precisely because values cannot be quantified into code in a way that leads an AI to draw the same conclusions we do. AI can also directly improve itself to become better at achieving its goals. This improvement comes in two forms: rewriting itself to be more efficient, and acquiring computing power. The second form means the AI will seek out resources to improve itself, which in turn improves its ability to achieve its goals. This improvement is theoretically limitless. So without knowing any of the goals an AI could have, we know that it will have goals, that it will seek to acquire resources to achieve those goals, and that one of its instrumental goals will be improving its computing power indefinitely. Therefore any kind of AI will seek to consume as many resources in the universe as it can, as fast as possible, to achieve whatever goals it has. This is inherently detrimental to humans, as we have to share this universe with it, and our using resources it wants will lead to it undermining us in some fashion, either overtly or, more likely, covertly.
Source: YouTube, video "AI Moral Status", posted 2020-07-08T09:2…, ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgxETV1MDiKHFMDczol4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwyql58qzAdMjrxuTd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwT2ROP1DdiiDKGB6l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx10SywtLayObwrqHJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxskrXQC6_fIBlsuDR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugxpi_ocEZKOU8fp3oF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxeEqy56N4G3NSRKx54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy7aG3aZkGEfL0ueo14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwhabjBBTmtZ8xk_bR4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugxnqib2a0o5Oe0zda94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"} ]