Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have been in the software development world for almost 15 years, and about a year ago I realized that generative artificial intelligence, like ChatGPT, was able to write very good code and solve algorithmic problems. Back then it was a rather basic model compared to the current version 5. At that time I understood that sooner or later, within a period of about five years, it was going to replace us for many tasks, and that the market as we know it would disappear. History shows us the same with the modern age: during the rise of mechanization and industrialization, hundreds of thousands of jobs carried out by master craftsmen vanished, replaced by children with little knowledge who only had access to a machine. That’s exactly what I thought, and that’s why I decided to pursue an MSc in AI. I believed the era of negative impact on software engineering would take much longer to arrive, but it came much sooner than expected. I just hope that when I finish my postgraduate studies I can get a job, even if it’s just coordinating what AI already does, because I honestly also think there will be two social classes: one very rich and the other extremely poor, since there won’t be enough jobs. This had already been predicted by Viviane Forrester in The Economic Horror.
youtube AI Governance 2025-09-11T16:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugyu3MQ9BrZWBzBmUNl4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgwAyQ-8q7tJzIoARhl4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgyVLOEOnWrRsTNbytp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_Ugxtk2bbe2C9hVPzApR4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_Ugyg6iO47ydWq5LWRe14AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgzpnVUjRaaxLmFCJ0t4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgzKvO3BYxlrKgCJZhx4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear",   "emotion": "resignation"},
  {"id": "ytc_Ugy5EqkZQpJQeveKSt54AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgxUVDI6XXYLEkKEO_94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgxJH__Ex4TbhaHkVaB4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"}
]
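A minimal sketch of how the batch response above could be parsed to recover one comment's coding. The `find_coding` helper and the allowed-value sets are assumptions for illustration: the value sets are inferred from the observed outputs, not from a documented codebook, and the schema (`id` plus the four dimensions) is taken directly from the raw response.

```python
import json

# Raw batch response, copied verbatim from the log above.
RAW = """[
{"id":"ytc_Ugyu3MQ9BrZWBzBmUNl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwAyQ-8q7tJzIoARhl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyVLOEOnWrRsTNbytp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxtk2bbe2C9hVPzApR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugyg6iO47ydWq5LWRe14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzpnVUjRaaxLmFCJ0t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzKvO3BYxlrKgCJZhx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugy5EqkZQpJQeveKSt54AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxUVDI6XXYLEkKEO_94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxJH__Ex4TbhaHkVaB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

# Value sets observed in this batch only -- NOT a documented schema.
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "mixed", "deontological"},
    "policy": {"unclear", "liability", "ban"},
    "emotion": {"mixed", "fear", "resignation", "approval", "outrage"},
}

def find_coding(raw: str, comment_id: str) -> dict:
    """Return the coding record for `comment_id`, checking each dimension
    against the values observed in this batch."""
    by_id = {record["id"]: record for record in json.loads(raw)}
    record = by_id[comment_id]  # KeyError if the id was not coded
    for dimension, allowed in OBSERVED_VALUES.items():
        if record.get(dimension) not in allowed:
            raise ValueError(f"unexpected {dimension}: {record.get(dimension)!r}")
    return record

# The record below is the one whose values match the Coding Result table above.
coding = find_coding(RAW, "ytc_UgzKvO3BYxlrKgCJZhx4AaABAg")
print(coding["emotion"])  # resignation
```

Indexing by `id` before the lookup makes repeated queries O(1) and surfaces a missing comment as an immediate `KeyError` rather than a silent `None`.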