Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He's right in the conclusion, but his logic is terrible. Besides AIs don't "compute" the way he understands it. In fact they don't compute, it's why they are AIs and not programs. He should revisit his notions if he is going to go around giving his opinion publicly. The paradox of Godel is not a real paradox either, it's just a fallacy that it seems not too many people are able to recognize. If you derive a statement from a set of rules and you assume that deductions are true, then the statement must be true, no matter what it says. In other words those statements are really meaningless, so you can't use them to self-contradict. It's a philosophical concept that people have talked for centuries, it's a bit surprising people do not learn. Funnily, it's also why AIs are also meaningless and why Penrose is right. AIs do have simple rules to chose what the next token is which are randomized. But they aren't "computations" anymore than an anthill or a maze are computations. There is no logic behind, or reasoning, while there is in Godel's setup and that's usually what people mean by computation. There is zero truth in the next token at all, it's simply relational. But if the AI says: "this statement cannot be refuted by AI", it does not mean anything, it's just silly. Same for Godel but for different reasons.
youtube AI Moral Status 2025-06-25T23:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           none
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwvFuy-zcNrexPQEWl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxENx4v8DR-hKlmAV14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz653HW58eAdV9B-F54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyek2bghu7OyBrvL_t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwvc20FlnZum3lN7zR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwu3QsSobK05qPQoA54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxKPB-IeF6MKWEqPoN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLalFwqHauef3zW814AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzRn8WJMHhk7M347414AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzrv6G1sb9PaVoXFAN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
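The coded dimensions shown above are looked up from this raw array by comment id. A minimal sketch of that lookup, assuming only the JSON shape visible in the response (the function name `coding_for` is illustrative, not part of the tool):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
RAW = '''[
  {"id":"ytc_UgxKPB-IeF6MKWEqPoN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzRn8WJMHhk7M347414AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]'''

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse the raw model output and return the coding row for one comment id."""
    rows = json.loads(raw)
    by_id = {row["id"]: row for row in rows}
    return by_id[comment_id]

row = coding_for(RAW, "ytc_UgxKPB-IeF6MKWEqPoN4AaABAg")
print(row["responsibility"], row["reasoning"], row["emotion"])  # developer mixed outrage
```

The dict-by-id step also surfaces duplicate or missing ids, which is useful when reconciling the raw response against the displayed coding result.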