Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So 35 min in Hinton says Elon Musk doesnt have a moral compass, but when Steven asks what about Sam Altman does Hinton pauses and answers he does not know 😂 what ECHO Chamber has this professor been living in .. Elon is the single greatest proponent on the dangers of AI , and Elon vehemently wanted to keep open AI a NON PROFIT .. Altman saw billions and went for it .. this wanker Hinton talks several times about the dangers of AI in For Profit corporations (as legally bound to make profits" he states) , exactly why Elon did not want open AI as a for profit and he understands the risks to humanity, as does Hinton - yet Hinton thinks Elon has no moral compass, wow what a total hypocrite!
youtube AI Governance 2025-09-29T19:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxDphRRIpVFYVsv3q94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzzOOs43BFWmbgMFd54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzRklTpdRcr-j-RQQx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgweTT3x1hocHuHGTCJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyrmxjwg5AyLN4HFyN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxl2vBOaw8c5AAqN2J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwf_OoEELlzpyTFVjZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzhAmX8ul0ZwE3Df6p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx4i7muG0X6Pth7vNJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzMELLnzI7OEvN_Yy94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
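Inspecting a raw response like the one above amounts to parsing the JSON array and looking up the entry for a given comment id. A minimal sketch of that lookup, with a validation pass against the dimension values visible in this output (the `lookup_coding` helper is hypothetical, and the allowed-value sets are inferred from this one response, so the real codebook may contain more values):

```python
import json

# Allowed values per coding dimension, inferred from the example response above.
# ASSUMPTION: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "fear", "mixed", "resignation",
                "approval", "indifference"},
}

def lookup_coding(raw_response: str, comment_id: str) -> dict:
    """Parse a raw LLM response and return the coding for one comment id."""
    entries = json.loads(raw_response)
    by_id = {e["id"]: e for e in entries}
    entry = by_id[comment_id]
    # Reject values outside the codebook instead of storing them silently.
    for dim, allowed in ALLOWED.items():
        if entry.get(dim) not in allowed:
            raise ValueError(f"unexpected {dim!r} value: {entry.get(dim)!r}")
    return entry

# One entry copied verbatim from the raw response above.
raw = ('[{"id":"ytc_UgzhAmX8ul0ZwE3Df6p4AaABAg",'
       '"responsibility":"developer","reasoning":"virtue",'
       '"policy":"none","emotion":"outrage"}]')
coding = lookup_coding(raw, "ytc_UgzhAmX8ul0ZwE3Df6p4AaABAg")
print(coding["emotion"])  # outrage
```

Indexing by id rather than scanning the list also makes a missing or duplicated comment id fail loudly, which is what you want when auditing coded output against the raw model response.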