Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's sad for someone so smart to miss the point completely. AI is not an alien species. It's an algorithm that runs on very expensive and fragile hardware. It doesn't feel, it doesn't need, it doesn't want, it cannot reproduce naturally by itself. Even if ASI is achieved, it's assuming too much that it will want to overpower us, and that it will be capable of it. Why? And how? Being intelligent is not enough. We live in a physical world, where moving things requires physical contact, work, energy, and many times the opposition of bunches of us pesky unaligned humans, with all the military and weapons that humans love to produce. The only real danger that should concern humanity is humanity itself. Ai is a tool. It has no "intention" other than what humans program it to do. We decide what is selected for in its evolution, based on the data we feed it. We decide, always. Money decides, political power, armies... not intelligence. Not by itself. Humans are constantly at war, that should be orders of magnitude more concerning, but the wars continue, and that doesn't seem to be a problem as alarming. It's quite hypocritical and out of touch. These ai intellectials have drank their own coolaid and now believe in their own delusion that they have created an alien species that will want to exterminate us. They think it's more dangerous than the nuclear bomb. Come on. Give me a break.
youtube AI Governance 2025-11-13T12:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx28842LGnhrDJHlJl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxnfTRINZ65SjUcDap4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwlc85ijUfQU2mN2St4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyojndqyzVgp5xv3GV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwM5iiU04ZJXag1L4t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyDzJPsqn3NZrq7awN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz4UYUwBlGdXWop8AB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKsyxfOmwtUjIWlWR4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzGRht6dUkMNO57yhR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxOeAwCVjrQfn5FLu54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
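A raw response in this shape can be parsed and sanity-checked before the per-comment codes are stored. The sketch below is a minimal validator, assuming the allowed labels per dimension are exactly those observed in this record (the full codebook may include more); the function name `parse_raw_response` is hypothetical, not part of any pipeline shown here.

```python
import json

# Allowed codes per dimension, inferred from the values seen in this record.
# ASSUMPTION: the real codebook may define additional labels.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "resignation", "outrage", "mixed",
                "fear", "indifference"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={value!r}")
    return rows

# Usage with a one-row example in the same format (hypothetical id):
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"mixed",'
       '"policy":"none","emotion":"approval"}]')
rows = parse_raw_response(raw)
print(rows[0]["emotion"])  # → approval
```

Validating at parse time keeps malformed or hallucinated labels out of the coded dataset instead of surfacing later as aggregation errors.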