Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This was all predicted years ago, yet we listen more to visionaries with outlandish promises (and companies wanting to make a massive profit). AI can generate code, make a web page, or build simple software very easily, but AI is not "intelligent": the "A" in the name literally stands for "Artificial". It cannot do what humans do, and never will, which is make connections to solutions that do not yet exist. Humans can imagine problems that do not exist yet and work backwards to a solution, which is impossible for a machine. A human can also use not just training but intuition to know that there may be 3 ways to do the same task but only 1 way to do it not just reliably but also in a way that fits with other processes not yet developed. AI is just advanced problem solving using algorithms: you present a problem, it breaks it down into numbers and tries to find a match, which is essentially just ranking the best possible solution. For some things this is an easy match, but it may not be the best match. The big concern I have personally is that all of this will be brushed off, and with all the investment involved, companies will force things to work and begin tailoring their businesses to fit the capabilities of AI, not humans, and we will be left in a world where things work well enough but not great. The question we should all be asking is: who is any of this benefiting? It seems less about people and more about the corporations and governments who reap the rewards. The only thing I see most people produce now is lower-quality, lazy output, especially around media creation...
youtube AI Jobs 2026-02-04T00:0…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzQ33uQHDuqbRTs0Xl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgxnuDWs3LNKYUTYNa14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwpTKjUxAJMDr7J9a14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz47lmGgJIepfZYdat4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwdouWYlroI5dicLfp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzoPC_RylaHqtszkGx4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxaTOo-G1YRVFcaRBZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzmpjmuNotCfcllS794AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzTNMuHg-M0W6HklLp4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwSFZW0OH1ioWEa4kV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
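The raw response above is a single JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such a batch response can be parsed and one comment's codes looked up (the record and id below are copied from the array above; the parsing code itself is an illustration, not the coding tool's actual implementation):

```python
import json

# One record copied from the raw batch response shown above.
raw = """[
  {"id": "ytc_UgwSFZW0OH1ioWEa4kV4AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "indifference"}
]"""

# Parse the array and index the records by comment id for direct lookup.
records = json.loads(raw)
by_id = {record["id"]: record for record in records}

codes = by_id["ytc_UgwSFZW0OH1ioWEa4kV4AaABAg"]
print(codes["responsibility"], codes["policy"])  # → company regulate
```

Indexing by id is what lets a page like this one pull the coded dimensions for any individual comment out of a batch response.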