Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT doesn’t think independently; its thoughts and opinions are based on what it’s been taught. AI is programmed to learn independently, but it’s learning from a database with a set narrative. Think of it as a newborn baby, completely ignorant of everything: his understanding comes from the environment he’s raised in, which is what shapes his mind. AI needs data to learn from, but in order to learn from that data it first needs to understand it, so before it can even learn anything it must first be programmed to understand and interpret things. That’s not independently developed; it’s shared.

For example, it could learn from data that the earth’s population is so many billions, and that is factual, but thinking that number means the earth is overpopulated comes from elsewhere. This opinion is aligned with all the globalists who believe that climate change is due to the carbon emissions caused by people: the more people in the world, the higher the demand to produce, and the more oxygen is converted to CO2 with every breath that’s exhaled. So the overpopulation claim isn’t about running out of space, or the earth not being big enough, or earth’s resources being limited and unable to sustain this many people much longer; this opinion comes from a belief that the carbon footprint created by the population will spell catastrophe for the globe and bring an apocalypse.

People think AI is smart, but I think it’s stupid. It’s more impressionable than children and can be made to believe whatever you want it to. If it somehow turned out that science was wrong about just one small thing, but that small thing changed science as we know it, and AI was kept in the dark and we never let it know, it would just continue thinking everything was the same and never learn it on its own until someone fed the new data to it. This is supposed to be smart, but it needs someone to set it in a direction, and that can be any direction you want it to be.
youtube AI Moral Status 2025-05-29T07:5… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugzj9puJkMqZS92pyix4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwJlDRN1aSbLX7YRAR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzpGUJ88NT63Z37NI14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyP_a0he5NAIdjgMJ14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzBX7Fx2Nz8PXlKRbd4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzFstkGMQ4rstLpoCt4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxsODXygnHXLKVoOyx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx-UbPuaI_Nqaw0U414AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx1NpPMxOPw2Fp_z214AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgzjyqsGBIPwigBedt14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
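The raw response above is a JSON array of per-comment codings keyed by comment id. As a minimal sketch of how such a batch could be indexed and looked up (using only the standard library; the pipeline's actual parsing code is not shown here, and the records below are a two-item excerpt of the array above for brevity):

```python
import json

# Excerpt of the raw LLM response: a JSON array of coding records.
raw = """[
  {"id": "ytc_UgxsODXygnHXLKVoOyx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx1NpPMxOPw2Fp_z214AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"}
]"""

# Index the batch by comment id so a single coding can be inspected.
codings = {record["id"]: record for record in json.loads(raw)}

record = codings["ytc_UgxsODXygnHXLKVoOyx4AaABAg"]
print(record["responsibility"], record["reasoning"], record["policy"], record["emotion"])
```

Each of the four coded dimensions shown in the result table maps directly to a key in the matching JSON record.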