Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He's kind of full of shit. Here are the prompts I'll ask you to feed to ChatGPT 4o. Prompt #1 "thinking about the way generative AI models work based on the fundamentals laid out in "attention is all you need" paper, we haven't really had a major break-through yet on how these models work", read what it has to tell you. Prompt #2 "Sticking with transformer architecture there is no way to get to a point where AI truly innovates new discoveries of science or otherwise?" read it's response and finally follow with prompt #3 "is there really any true advancement beyond transformer architecture?", There's another huge leap that needs to happen to break-out beyond transformer architecture, the problem is transformer architecture is the magic. If you want one last prompt you can key in this, Prompt #4 "But transformer architecture is like the magic sauce, saying we need a breakthrough to move beyond it is like saying we need to find a way to make computer chips out of something other than silicon" and then watch GPT4 agree that's the kind of highly unlikely leap needed.... All of these tech companies have their existence at stake on selling the world they are close to AGI. If they are, it's some discovery deep in a lab not yet revealed. Highly unlikely, that's not how these things surface, they surface through academic research just like transformer architecture did
YouTube · AI Moral Status · 2025-07-25T21:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzgUAFD91HCyNjQ7NV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyONWPWzX1mclXR_x54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgycSeRBDZPAoyOJzJJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx9Gap0XP6BqqCd3GB4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyfXdKRoh8loRFSfUN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwYt1IFKBnvkBSb17t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxHcLy0naBMwGdyPiN4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyVF_9TuxDSmW5SNCx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx86T-VhmY2epeRhGV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugz8uNqXzJgJhOZjZ5B4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
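A response like the one above can be checked before its codes are accepted. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the records shown here (plus "unclear", which appears in the coded table), and the function name `parse_codes` is an illustration, not part of the actual coding pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the responses above;
# the real codebook may define a different vocabulary.
ALLOWED = {
    "responsibility": {"none", "developer", "government", "company",
                       "ai_itself", "distributed"},
    "reasoning": {"none", "unclear", "consequentialist",
                  "deontological", "contractualist"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose id is present
    and whose codes all fall inside the allowed vocabulary."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if rec.get("id")
        and all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]
```

Records with an out-of-vocabulary code (for example, a hallucinated emotion label) are silently dropped here; a production pipeline would more likely log them for re-coding.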