Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a non-believer in the AI hype, I think that the issue actually is sci-fi. The problem is that I do not really like the anthropomorphization: claiming that the problem is "superintelligence" suggests that we'll have to deal with a sentient artificial being that might have some actual intent to harm humanity. As of now, that is sci-fi and that is that. On the other hand, the fact that an AI may misunderstand a command and/or conclude that a harmful course of action is the best way to achieve the given goal is actually plausible. However, this scenario does not require "superintelligence"; it does not even require AI at all, because it is actually a problem of all automated systems. Any program may encode unpredictable, unintended behaviors that may end up having very severe consequences. The additional problem with AIs (neural networks, to be more specific) is that their decision process is not human-readable, which makes debugging extra difficult. In general, I think that the way to deal with this issue is "simply" to use technology wisely: do not fully automate (or automate at all) crucial processes, have some protocol in place to deal with errors, put guardrails around the system so as to minimize damage. That sort of stuff. Just to be clear, this is an interesting and important topic of discussion; it's just that it doesn't need to feed the hype.
Source: youtube · Video: AI Moral Status · Posted: 2025-10-30T19:3… · ♥ 149
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
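For orientation, the category sets for each dimension can be reconstructed from the codes visible on this page. The sketch below does exactly that; the pipeline's actual codebook may define further values, and the names `ALLOWED_VALUES` and `invalid_dimensions` are illustrative, not part of the tool.

```python
# Category sets reconstructed from the codes visible on this page only;
# the pipeline's real codebook may include additional values.
ALLOWED_VALUES = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning":      {"consequentialist", "deontological", "virtue", "unclear"},
    "policy":         {"regulate", "liability", "industry_self", "none"},
    "emotion":        {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def invalid_dimensions(entry: dict) -> list[str]:
    """Names of dimensions whose coded value is missing or outside the allowed set."""
    return [dim for dim, ok in ALLOWED_VALUES.items() if entry.get(dim) not in ok]
```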
Raw LLM Response
[ {"id":"ytc_Ugz7To3N3bTqWHRXAWd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugzg3My9h6MiHmdkDD54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzS6P_qp6JJzzMBB394AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzLgdhp4_xZ5n82po54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgxMJlOHwQNVVDW5kz14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwMu7jkPZ781oZvapV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_Ugxo6c3EvZkZGen8eaN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugz0MG1VkiFCZxQxg794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugx3nSuDFDjpcBaDBdF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzUrlFSrmKEOxF9n-N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"} ]