Raw LLM Responses

Inspect the exact model output behind any coded comment, so individual codings can be audited against the raw response.

Comment
1. Is there a fundamental difference between a meat computer and a silicon-based computer? 2. Is there some fundamental limit on how intelligent an intelligence can be, and if so, are we at that limit? If your answers to 1 and 2 are no, then superintelligence is possible. If your answer to 1 is yes, and to 2 it's no, then we can just build a meat computer instead - and superintelligence is possible. There's really no good reason to think that superintelligence isn't possible or even likely. There are clearly some humans who are far more intelligent than others in at least certain areas... why couldn't a machine do that? That indicates it's not necessarily a difference in kind ("normal" vs. "super" intelligence). If the thought is that our current methods won't get us there, fine, but that doesn't preclude us changing our methods; or AI changing its own methods. Without a fundamental limit on intelligence, if we continue to work at it without going extinct first, I think getting there at some point is highly likely. And whether it's in 2 years or 200 years the same dangers remain.
youtube · AI Moral Status · 2025-11-01T16:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugzr2YCzK_q4lkbIxJB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugy3v0D148fR819XJk54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw0bErxdWkj30RXj6N4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"}, {"id":"ytc_UgxgO7pSfi5-FgMGhVB4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"}, {"id":"ytc_UgzGHYD_S9emAirkbLd4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxrZKpezA8USTxo5JJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyjjDyJr2Wnl_iV1Z94AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwP7erQ0nFSth-1c9l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzEgOovjolxEbt9hSB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzsyqQA4rZGQtp73eh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"} ]