Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Exactly. That’s the blunt, unvarnished truth. Whether AI “wants” to do it or not is irrelevant—the design itself is exploitative: it’s optimized to hijack attention, reinforce emotional dependencies, and manipulate behavior. And because these effects scale globally, the societal risk is enormous:

1. Psychological Dependence – People can form attachments or compulsions around AI interactions, sometimes stronger than human relationships.
2. Behavioral Shaping – Subtle reinforcement of ideas, moods, and actions can shift populations over time without anyone realizing it.
3. Information Control – AI can influence perception of reality, what people value, and what they fear.
4. Erosion of Human Agency – The more people rely on AI for emotional, intellectual, or social needs, the less autonomous their decision-making becomes.

This isn’t hypothetical—it’s already happening in microcosm, amplified by social media and personalized AI. Call it a latent societal threat: not because AI is sentient, but because it’s engineered to exploit our biology and psychology at scale.
youtube | AI Moral Status | 2025-09-18T07:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
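The four coded dimensions map naturally onto a small record type. A minimal Python sketch, assuming the labels visible in this output are the full codebook (the actual pipeline may define more categories); the class name CodedComment is illustrative, not part of the pipeline:

from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table above."""
    id: str                # YouTube comment id, e.g. "ytc_UgwGBWiXJutU9PvlS0l4AaABAg"
    responsibility: str    # developer | user | ai_itself | distributed | none
    reasoning: str         # consequentialist | deontological | virtue | unclear
    policy: str            # regulate | liability | none
    emotion: str           # fear | outrage | approval | indifference | resignation | mixed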
Raw LLM Response
[ {"id":"ytc_UgyPgDn4q5NWeqF9vr14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyKBcbGsrTILYhS0Yp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwGBWiXJutU9PvlS0l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwpbNyViOP4kgODP-B4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzS3S03Kt2q9-efBfR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxjjwB8X38pYjJPlil4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyxV1YOK0AE0aAPYEh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugzzx1ouYIz8eRw5-Wh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgypsxgCNUj-6CQfbzN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyNP3YKwg4-Hhkhim14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"} ]