Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Tldr: Generative AI isn't Hollywood AI and I wish people would stop thinking it is.

I'll admit I'm not an expert, but I am somewhat tired of people comparing what we have today to Hollywood AI. It is not the same. The AI we have does not think, and it does not intuit. Everything it does is based on logic chains and probability. Even code based on "random" number generators isn't random because we can't actually make a true random number generator.

If you think AI can write it's own code perfectly: How often do games, chunks of code produced and refined over years, have game breaking bugs. How often do the computers or devices you need to do your job just stop working for no good reason. If teams of highly trained and motivated humans can't code something without cascading errors, what makes you think something we code is going to be able to do better?

God there's so much I want to say about that "avoid deletion" test, but this comment is already way too long. I feel like this might be what someone with basic engineering knowledge back during the industrial revolution might have felt towards everyone who cried that the machines were going to be the death of everyone and the end of the world.
youtube 2024-12-15T22:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgymSciS9-4kOGe8DB94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxvgirs5dDdgts0iKB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwH8vxiYbI7QZM52Nh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxDF3aqTeiSw2_j33l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxe_EuwLuNMIEukIB94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw6YTviQ9iI91qN4s54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2Fb4PoVBLBju5ufJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwn95Sno3IE0-HJI354AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxG8e8KV2CqJHIHpDR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzs6NGPlFBW8eGDs5p4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
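The raw response is a JSON array with one code object per comment, keyed by a `ytc_…` comment id. As a minimal sketch (not part of the coding tool itself), such a batch can be parsed back into a per-id lookup table; only two of the ten entries are reproduced inline for brevity:

```python
import json

# Two entries copied verbatim from the batch above; the full raw
# response contains one object per coded comment.
raw = """[
  {"id": "ytc_Ugw6YTviQ9iI91qN4s54AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzs6NGPlFBW8eGDs5p4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "resignation"}
]"""

# Build an id -> code-object lookup from the array.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the four coded dimensions for one comment by its id.
record = codes["ytc_Ugw6YTviQ9iI91qN4s54AaABAg"]
print(record["reasoning"], record["emotion"])  # mixed indifference
```

Keying by comment id makes it straightforward to join a model's raw output back onto the displayed coding-result table for any individual comment.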