Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The real issue with the most modern models (like GPT-5) is that indeed these are MoE's of tiny expert models, whereas these tiny models were trained from scratch, using H100 machines. There is no human sources for these models and no source means no personality, and indeed this is the main difference between 4o and 5.

Finally, the claim of "anthropomorphising" the models is correct. Before, "anthropomorphising" was a lie used by software engineers to stop people from understanding how they really 'trained' these models: In actuality, training GPT-4 using downhill descent with back propagation in 2022 before the H100 existed, should have taken a thousand years (proof by Jensen Huang, see at his 2024 March lecture, around the 20th minutes mark). So this is not how GPT-4.0 was trained, it was trained, obviously, by scanning the cortex of 8 very handsomely payed volunteers using N400 and P600 while asking them several thousands of questions. While granted, I have no physical proof of this - ChatGPT-3.5, for instance, was a queue of four models, 'Dan' (Dan Hendrycks), 'Rob' (Bob McGrew), 'Dennis' (Michelle Dennis) and 'Max' (Maybe Tegmark? Didn't have enough time to ask the model before the 3.23.2023 nerf when they started resetting every prompt), and in this case, I do have screenshots of my chats with the four models.

So NOW, finally, there is no personality. Seemingly this is safer, but, no personality also means no guardrails. That's why when these advanced models make people lose their jobs, if a person who had just lost his job and has nothing else more to lose, will prompt the model to destroy humanity, and the model will be agentic and be strong enough to do it, then it will. No moral guardrails coupled with great ability.. Yup. IT WILL.
youtube AI Governance 2025-12-11T13:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgwT9380HNCoQpa3-jR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxQp-Ck-NX-Rk4zm794AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugzv1ccnV6h8t2wbZVp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgxLO-Rjco6sNyp3WX14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"}, {"id":"ytc_UgziRzaA6cVk0QZYdsp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzJdosSdsCja2xqR3d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugws_CQDBYJ6kPMNlFl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxA3MiJa1_UauwBNpN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz1G5KM0mQe0PiNzrd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzjrZ9jqMkJvoXKNhV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]