Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1. As long as we don’t have General AI, there is nothing to fear. Current AI is a narrow algorithm with no goals, no desires, and no self-awareness.
2. Even the most pessimistic scenario collapses under basic economics. If 99% of people become poor, mass demand disappears → no market → corporations collapse. An economy cannot function without consumers, which means full automation without redistribution of goods is impossible.
3. No one can create a “self-replicating AI.” AI does not understand its own actions, does not set goals, and cannot create motivation. Without subjectivity, a “machine uprising” is impossible.
4. The real risks are social, not technological. The danger lies in human decisions: monopolization, politics, and management errors — not in AI itself.
5. Fear is stronger than reality. Media dramatize, people confuse intelligence with consciousness, but AI has no sense of “I.”
6. The true boundary is consciousness. A human being has presence, intuition, and subjectivity. AI does not.
Conclusion: An AI apocalypse is impossible. AI is a new industrial revolution — not a threat to humanity.
Source: youtube · Viral AI Reaction · 2025-11-28T13:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugy9FptBAcsuXIWhn414AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzcPRDJhs_YjyXLIh4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyT2RGBEsI34HwtVJN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgypFyeRmBTkqEe3Nld4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxr1mU295SYBM_DMsx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzkcwb7GTBY7sH_jIR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwh3LpVOjrPtgGN8454AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugy2DGSWIirOqs2V2Vd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVTc3R85mmsEK5crl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyrjmj_k4_SaxPLi794AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
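The raw response is a JSON array with one record per comment, each carrying an id and the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed and sanity-checked in Python — the record keys come from the response itself, while the `parse_codes` helper name and the "all keys must be present" rule are assumptions for illustration:

```python
import json

# Two entries copied from the raw LLM response above; the full array has ten.
raw = '''[
 {"id":"ytc_Ugy9FptBAcsuXIWhn414AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxzcPRDJhs_YjyXLIh4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# The four coding dimensions plus the comment id, as seen in the response.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(payload):
    """Parse the raw LLM output and verify every record has all coding fields."""
    records = json.loads(payload)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
    return records

codes = parse_codes(raw)
print(len(codes))           # 2
print(codes[0]["emotion"])  # indifference
```

A check like this catches the common failure mode where the model drops a dimension or returns prose instead of JSON, before the records are written to the coding table.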