Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He seems to say that he doesn't think we could create a conscious mind unless we understand what that is beforehand. But the universe already created consciousness, and I don't think the universe knew what it was doing when it did. I don't think AI would ever be able to understand or think in any meaningful sense of the word, if AI was only if and then statements. But it is not only that anymore. Now ai systems 'learn' relationships between many, many, different subjects and how they all relate to each other. They create a 'world model'. And we don't entirely understand how it works. These neural nets might actually do some real thinking and understanding. If they don't, it's so close to being that, that I don't see any reason to not consider it essentially the same type of thing, but an alien, exotic mind. With different strengths and weaknesses. But human minds, architecture wise, is not really changing. But for ai, it is. And unlike in evolution; this time we have intelligent design. When people don't believe ai will one day be smarter than them, if technology continues, is I think an example of their inability to predict the future. We are standing still, but AI is not.
Source: YouTube, "AI Responsibility", 2026-03-20T01:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugx_w3gupnaqWxLCw-54AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyMTdEYhGDBa1U-i2R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzEyDLZwR3e8NGg02B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgycR16IkMQHLlVk2hl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzNHo5KnZeRQRmEvBx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyVyM2sxpbwudFfblh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugykc3RO2ljbzK2eCuZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzzyz0fFqHEZ-g5Trd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxuk6MUPDWc1dWHonN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwaH5y5I8JVz2_Uish4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"resignation"}
]
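A minimal sketch of how a raw response like the one above could be parsed and sanity-checked before use. The allowed values per dimension are inferred from the records shown here, not taken from the tool's authoritative schema, so treat `SCHEMA` and `validate_codes` as hypothetical names for illustration:

```python
import json

# Allowed values per dimension, inferred from the coded records above.
# This is an assumption, not the coding tool's official codebook.
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself", "user", "developer"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"unclear", "regulate", "none", "liability"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against SCHEMA."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Example with the first record from the response above.
raw = ('[{"id":"ytc_Ugx_w3gupnaqWxLCw-54AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
codes = validate_codes(raw)
print(codes[0]["emotion"])  # → indifference
```

Validating each record against a fixed value set catches the most common LLM-coding failure mode: the model inventing an off-schema label that would otherwise slip silently into the analysis.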