Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- @canned_kuchie4724 If we ban porn deepfakes we have to ban all deepfakes. This … (ytr_Ugx6yU1av…)
- This will definitely bring out lot of potential and curiosity from introvert kid… (ytc_UgwSpNiUS…)
- Ok so AI dominance is inevitable but bitcoin is safe because Quantum computing i… (ytc_UgzW7Sr7p…)
- There are so many useful things that AI tech could be useful for, but creativity… (ytc_UgxI5BYB6…)
- the thing is that there is no intention behind ai art, anyone can make or do so… (ytr_UgyXHPwB1…)
- I agree with the Doctor when he says we "don't need super intelligence." I perso… (ytc_UgznlOnMI…)
- There will still be markets around the world that remain unaffected by replaceme… (ytr_UgyPEXKkY…)
- Why is an artist being reported, oh, are you one of those, one who thinks Artif… (ytc_UgxJt9BsN…)
Comment
As a non-believer in the AI hype, I think that the issue actually is sci-fi. The problem is that I do not really like the anthromorphization: claiming that the problem is "superintelligence" suggests that we'll have to deal with a sentient artificial being that might have some actual intent to harm humanity. As of now, that is sci-fi and that is that.

On the other hand, the fact that an AI may misunderstand a command and/or conclude that an harmful course of actions is the best way to achieve the given goal, that is actually plausible. However, this scenario does not require "superintelligence", it does not even require AI at all, because it is actually a problem of all automated systems. Any program may encode unpredictable, unintended behaviors, that may end up having very severe consequences. The additional problem with AIs (neural networks, to be more specific) is that their decision process is not human-readable, which makes debugging extra difficult.

In general, I think that the way to deal with this issue is "simply" to use technology wisely: do not automate fully (or at all) crucial processes, have some protocol in place to deal with errors, put guardrails around the system so as to minimize damages. That sort of stuff. Just to be clear, this is an interesting and important topic of discussion, it's just that it doesn't need to feed the hype.
youtube · AI Moral Status · 2025-10-30T19:3… · ♥ 149
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz7To3N3bTqWHRXAWd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzg3My9h6MiHmdkDD54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzS6P_qp6JJzzMBB394AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzLgdhp4_xZ5n82po54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxMJlOHwQNVVDW5kz14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMu7jkPZ781oZvapV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugxo6c3EvZkZGen8eaN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz0MG1VkiFCZxQxg794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx3nSuDFDjpcBaDBdF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUrlFSrmKEOxF9n-N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
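The raw response above is a JSON array of per-comment codes, one object per comment with four coding dimensions. A minimal sketch of how such output might be parsed and validated before display, assuming value sets inferred only from the entries shown here (the real codebook may contain more values; `SCHEMA` and `validate_coding` are hypothetical names, not part of the tool):

```python
import json

# Allowed values per dimension, inferred from the entries shown above.
# Assumption: the full codebook likely defines additional values.
SCHEMA = {
    "responsibility": {"distributed", "ai_itself", "none", "developer", "company"},
    "reasoning": {"consequentialist", "unclear", "deontological", "virtue"},
    "policy": {"regulate", "liability", "none", "industry_self"},
    "emotion": {"fear", "mixed", "outrage", "resignation", "indifference"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed entries."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        # Comment IDs in this dataset start with ytc_ (comment) or ytr_ (reply).
        if not entry.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and hold an allowed value.
        if all(entry.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(entry)
    return valid

raw = ('[{"id":"ytc_UgzUrlFSrmKEOxF9n-N4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}]')
print(len(validate_coding(raw)))  # → 1
```

Filtering rather than raising keeps one malformed object from discarding the whole batch, which matters when a single LLM call codes ten comments at a time as above.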