Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sabine, anyone who still don't get how they really make these models is in a state of cognitive dissonance. All you have to do to understand they are not just trained, is to listen to Jensen Huang's lecture from March 2024. As he proves himself, mathematically, around the 20th minute in the lecture - before the H100, training GPT-4 should have taken a thousand years "and yet here we are". Problem? Yes, problem: GPT-4 was indeed trained before the H100, back in the beginning of 2022, when H100 was only on the drawing board. In reddit, they did try to excuse him, claiming "Huang meant 22,000 A100 chips in parallel". Well, each A100 emits 700W. So, if they needed 22,000 of them, they would have needed a huge cooling tower, like that of a nuclear reactor. And no, OpenAI didn't have it in 2022. So again, that's not how they train large models. In addition, anyone who study optimization theory will tell you that you cannot perform downhill descent with back-propagation without a base model. If you try to force a random map using this optimization technique, beginning in white noise, it will dig random holes on the map and fall into one of them. First they need a base model, only then can they train it further.
youtube AI Moral Status 2025-07-09T16:0…
Coding Result
Responsibility: developer
Reasoning: deontological
Policy: regulate
Emotion: outrage
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyhQgzMzsjkDWxlOg94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxt_h_2GkATXeQjeRZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgySyxnWyly2pDYFdlx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx-dSBkg8P2ww7ZyKN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxxHf1ZKyYn0ZNV_OV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwNSZAXvf_lWhhZGyF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz7uUKnHfm1AtG8ySF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6pE9QYd5NZmWT4qB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwsnnTClU85qvlBTlt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzdefEmPqtfgW15rPh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}
]
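A minimal sketch of how a batch response like the one above might be parsed and validated before the per-comment values are written into a coding-result table. The `parse_codes` helper and the `ALLOWED` vocabularies are assumptions for illustration: the category sets are inferred only from the values visible in this response, not from a documented schema, and the raw string is truncated to two entries for brevity.

```python
import json

# Truncated excerpt of the raw LLM response shown above (two of the ten entries).
raw = '''[
  {"id":"ytc_UgyhQgzMzsjkDWxlOg94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxt_h_2GkATXeQjeRZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Hypothetical closed vocabularies, inferred from the values seen in this response.
ALLOWED = {
    "responsibility": {"developer", "distributed", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "approval", "resignation"},
}

def parse_codes(raw_response: str) -> dict:
    """Parse a batch coding response into {comment_id: {dimension: value}},
    rejecting any value outside the expected vocabulary."""
    coded = {}
    for row in json.loads(raw_response):
        values = {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
        for dim, val in values.items():
            if val not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim}={val!r}")
        coded[row["id"]] = values
    return coded

codes = parse_codes(raw)
print(codes["ytc_UgyhQgzMzsjkDWxlOg94AaABAg"]["policy"])  # liability
```

Validating against a closed vocabulary at parse time catches the common failure mode where the model invents a label outside the codebook, so bad rows fail loudly instead of silently entering the results.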