Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Nice. I can envision that at some point we won't have any programming languages at all anymore. Those are abstractions for us humans, so we can understand what we do. Some of us do look at machine code when debugging, but we don't normally write in it, for good reason. I guess an AI wouldn't have to, if you train it right, though you would need a tool to render the code it writes for the human eye, and we already have that. For example, drivers could be written by giving the schematics of the hardware as input to an AI, if it's trained right. In the end, right now we use AI as a tool to handle our abstract languages; if we trained it for the special purpose of taking input and producing machine code, we might have an interesting AI to work with. But hey, what do I know? I'm not someone with deep knowledge in this, so I'm not saying it's even feasible. I'm only saying that we shouldn't assume that what we need to be good at programming is the same thing an AI needs to be good at to become good at programming. Sorry for my English spelling; it's not my strongest suit.
youtube AI Jobs 2024-01-14T20:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxrL_kq297o-upcJwl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxLOn8aj2l0nZODrwt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzQ_7RCoEKTwVGthzp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzVHUaJqbFYuXYmWQF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzEqpGgCIpe2bG_Utt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxWwS2fvK8DEiVQH_14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugyh8M9hsJr3ZMuFDkB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxOyeHKCbpbGBHGl1R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzzK7s9iI3gshTotcd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzAc8vltdb3Nb8Q_1x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
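The coded dimensions shown above are presumably obtained by parsing this JSON array and selecting the record whose id matches the comment. A minimal sketch of that lookup step, assuming standard JSON parsing (the function name and error handling are illustrative, not the actual pipeline code):

```python
import json

def code_for_comment(raw_response: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coding record for one comment id."""
    records = json.loads(raw_response)
    for record in records:
        if record["id"] == comment_id:
            return record
    raise KeyError(f"no coding found for {comment_id}")

# Two entries excerpted verbatim from the raw response above.
raw = '''[
  {"id":"ytc_UgxrL_kq297o-upcJwl4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxLOn8aj2l0nZODrwt4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]'''

coding = code_for_comment(raw, "ytc_UgxrL_kq297o-upcJwl4AaABAg")
print(coding["emotion"])  # indifference
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is why keeping the raw response around for inspection, as this page does, is useful.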