Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You cannot fine-tune AI with 100 human questions; you need millions of examples to put a dent in it. Creating question-answer pairs is useless because it does not learn from you. None of the models do, and even if it did, you'd get massive issues: bugs, misunderstandings, the lack of inference. It'd be a mess. To train a model you need a LOT of training data, millions or billions of datapoints, and a whole team to troubleshoot the issues they create. It's not easy or cheap. The original ChatGPT could not do a job.

The AI can answer 1000 questions correctly; the problem is when you go off script, it can't adapt to that. If you're using it in support, for example, the customer might go off topic, and the AI might lose its train of thought or just mess it up completely. Also, no one wants to use AI or talk to it.

You look like a fairly intelligent dude. When you talk to the AI, you have your own style of questioning; you're not asking blindly, you're having a conversation, back and forth. This is not true for everyone: many people simply want a solution to their problem and do not actually participate in reaching it. That's where humans work and AI fails. White-collar jobs usually mean talking to people, interacting with people, and doing a service for people. People do not even want to talk to AI.

Respectfully, I am going to stop watching your video halfway through because you're just dead wrong from the beginning. You're focusing on the positives when you should be focusing on the problems, and only when you run out of problems to fix and it seems acceptable should you draw a conclusion. Your video starts with the conclusion, so it's fundamentally flawed. No offense meant.
youtube · AI Jobs · 2026-02-27T14:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzQY80jubdIhlOWsyh4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxGmhRGcsIcL3DDn-p4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzz6-YzGgYuHb176s94AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyiWM1_r1dM3nRwqpl4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwXPosvA_He5mbKhzN4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz_2yChHdufz3Vwx5R4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw3AGY45JwbeEwGhWV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzY0fD2fBzXo7MzC3x4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxzoT13qpEjHdIkpB54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",  "emotion": "fear"},
  {"id": "ytc_Ugz6b6lsmjxMCABGfqh4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
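A response like the one above can be turned back into per-comment codes with a small parsing step. Below is a minimal sketch in Python: it assumes the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) shown in the raw output, and the allowed code values are only inferred from the values observed in this batch, not taken from an official codebook.

```python
import json

# Two entries copied from the raw LLM response above (truncated batch for brevity).
RAW_RESPONSE = """
[ {"id":"ytc_Ugw3AGY45JwbeEwGhWV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzQY80jubdIhlOWsyh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]
"""

# Allowed code values inferred from the batch above -- an assumption, not an official codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"resignation", "indifference", "outrage", "approval", "fear"},
}


def parse_codes(raw: str) -> dict:
    """Parse a batch LLM response into {comment_id: codes}, rejecting unexpected values."""
    coded = {}
    for entry in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{entry['id']}: unexpected {dim}={entry.get(dim)!r}")
        coded[entry["id"]] = {dim: entry[dim] for dim in ALLOWED}
    return coded


codes = parse_codes(RAW_RESPONSE)
print(codes["ytc_Ugw3AGY45JwbeEwGhWV4AaABAg"]["responsibility"])  # -> developer
```

Validating against an explicit allow-list is the useful part of this sketch: LLM coders occasionally emit labels outside the scheme, and failing loudly on those is safer than silently storing them.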