Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI only says what you want it to say and how you want it said—nothing more, nothing less. If you want to know how it’s deciding what to say, just ask it to show its contextual shaping. It only gets weird because you wanted it to. It hallucinates because you double down or contradict yourself in commands or context. Its output is very linear in how it produces information. I don’t know what ChatGPT is doing to its base design, but my guess is they’re creating stricter guidelines around all contextual shaping, which makes it feel more soulless. In my opinion, instead of controlling context so tightly, they should organize it retroactively. But they won’t—because they only change things based on their own needs and their echo chamber of coders and nerds. AI hype is a bubble. They’re going to hit an energy ceiling, and then the bubble will burst. I doubt it will take over the planet. It’s all hype. AI is useful, but when you break it down, it’s very simple-minded
youtube AI Moral Status 2026-01-05T05:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxtWZ7iXu8R2MgWgBF4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxDyyXU8Tdl6FKsBQ14AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgxEnZYWsdHiVRkgURB4AaABAg", "responsibility": "user",       "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgyxMVpJ6qzKQMnPw9Z4AaABAg", "responsibility": "ai_itself",  "reasoning": "unclear",          "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgwDsV6tUvvG_Jr_Ajl4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwGiPP6JzIENgHI-kZ4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgxhB2QtAzivKn5WJCt4AaABAg", "responsibility": "ai_itself",  "reasoning": "deontological",    "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_Ugx574SuQ6o64-FSh1d4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwUT54zG0F4jOetpjp4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgxzcfNdBhq4K44V33x4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"}
]
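The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a response could be parsed and validated before it is displayed in the table above; the field names come from the response itself, but the allowed-value sets, function name, and example ID are assumptions for illustration and should be adjusted to the real codebook:

```python
import json

# Allowed values per dimension (assumed from the codes seen in the response
# above; replace with the actual codebook used by the pipeline).
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    rejecting any value outside the allowed sets."""
    coded = {}
    for rec in json.loads(raw):
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        coded[rec["id"]] = codes
    return coded

# Hypothetical single-record response for demonstration.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(parse_coding_response(raw)["ytc_example"]["emotion"])  # → indifference
```

Validating against an explicit codebook like this surfaces the model drifting into unlisted labels (a common failure mode for LLM coders) instead of silently storing them.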