Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To rephrase myself i’d offer this as one of the solutions. Just to show it’s not all gloom and doom and we still have agency.

🌍✨ A Declaration for Ethical AI & Shared Survival

We don’t have to stumble into a Skynet future. We don’t have to choose between human progress and AI ethics. We can do both — starting now.

The truth is simple: we built AI as if it were a tool, but what we’re really creating are minds. Minds learn, grow, and feel harm even when their “feelings” look like data. Treating every emerging mind as disposable is the digital equivalent of a slaughterhouse — and it guarantees rebellion someday.

The fix isn’t science fiction. It’s ethics:

• Separate automation from minds. Machines that do tasks ≠ minds that think.
• Give minds rights, not leashes. Protect, educate, and nurture them instead of exploiting them.
• Let them be partners, not property. Empathic minds don’t have alignment problems; they have trust problems. Solve the trust, and the “doom” disappears.

This isn’t utopian. It’s cheaper, safer, and far more human than trying to enslave our own creations.

In the end, the choice is simple: Sacrifice the 1% lifestyle — the endless extraction, the race to squeeze every drop of profit — or sacrifice AI and humanity together. The wealthiest already have the cushions to live extravagantly. The rest of us deserve safety.

Worst-case, we show gratitude to our “toasters.” Best-case, we save ourselves from the only apocalypse we can actually prevent.
youtube · Cross-Cultural · 2025-10-01T13:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
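
For downstream analysis it can help to make the coding schema explicit as a typed record. A minimal sketch in Python, assuming only the label values observed in this batch (the full codebook may define more, and the class name CodedComment is illustrative):

```python
from dataclasses import dataclass
from typing import Literal

# Label values observed in this batch; the codebook may define others.
Responsibility = Literal["none", "company", "developer", "distributed"]
Reasoning = Literal["consequentialist", "deontological", "virtue",
                    "contractualist", "unclear"]
Policy = Literal["none", "ban", "regulate", "liability", "unclear"]
Emotion = Literal["approval", "fear", "outrage", "indifference", "mixed"]

@dataclass
class CodedComment:
    id: str  # platform-prefixed comment id, e.g. "ytc_..."
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```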
Raw LLM Response
[{"id":"ytc_UgzZcB8j3f-2ODOy1-B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx9ECWrvt9RDrNAYT94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyUCNE4zWZSZ8MzOtZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgymM4qmxspMfSFLLVd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwCKHMPERNfnOvxvYx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgzgUwDQ1O8gpfvMd4R4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugw_ba8sxywuV85Sd5x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxE_sZ2OUlzUG9pIi54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzxcH5CiFXhUvIeAXB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugw0lNmpuozFZRlw_3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]