Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Another issue that will no doubt come up in the future is one of trust: not necessarily "Do I trust this AI over Fred the Programmer?" but rather "Do I trust this AI to create software without any hidden extras?" With the proliferation of AI programmers, we will no doubt see a slow proliferation of such programmers that are purposely designed to 'alter' their work for the betterment of a third party. This could come in the annoying form of targeted advertisements for a company's competitor popping up in its customers' web browsers... or it might take the form of a hidden backdoor into security systems, financial software, or maybe even operating systems. Granted, with programmers who are on the ball and checking over this code, that shouldn't happen, but... well... look at those companies (like game studios) that have released buggy/incomplete software, and now wonder what would happen if they did the same with such software from an AI because someone high up didn't want to go through the time and expense of verifying the code's integrity. If you don't know exactly who owns or is running the AI generating your code, then you may very well be leaving yourself open to a disaster.
YouTube · AI Jobs · 2024-01-16T18:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_Ugy4Kl_BFPWk6x6vsft4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyyMXh4Yc7-jEoI3bx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwYn95p4i0CrxUclOF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugz-EwaIH2eDRwB-zIp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzPuAazxVyoHiFz02d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugwpa9BhwzlWhVf0cg54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwOd0LTMuhntZMl2114AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugy6Pig0XQsTvUIIs6J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgwCTTVYkYZHMSbmmFR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzCyRON_2Togg6CjVp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]