Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
I find it a bit delusional to pretend to know better a specific subject than high level professionals in that subject, and then to say this specific task is already replaced. Obviously when someone says Ai cannot currently replace what they do, it's because those professionals are extremely advanced in their field and they can clearly see that AIs are nowhere near solving any of the problems they have to solve daily. AIs CAN replace human tasks, but only if highly skilled people in each of their respective fields actually give information as to what they need as tools to be more efficient, productive, and more creative. I'm an animator, a cinematographer, an artist, and I have not seen anything close to be impressive yet from Gen AI. Pretending there is anything impressive about it is admitting to not understand the highest standard of quality and accuracy. I know what makes a great picture, a great story, a great design, composition, etc... I don't currently see any AI actually applying any of the fundamental rules I know. Here and there, there are small parts that look OK, but I never see anything making me jump back and say: oh damn how did it know to do that?! It has never happened yet for me. All I see is always the obvious solutions, generic, with a ton of mistakes, clumsiness, ugly decisions, etc. I don't see anything helping anyone right now other than research of information on steroids, which itself is often full of mistakes. The way people react to Ai, in full panic mode or with a hype that makes them look like they took too much DMT, this is stifling progress all across the globe. People need to stop hyping this stuff and actually work on real tools. I'm still waiting for tools to do what I do better. If you do AI Dev, Prove it. Impress me.
Source: YouTube · AI Governance · 2025-09-06T19:0…
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | none                       |
| Reasoning      | deontological              |
| Policy         | industry_self              |
| Emotion        | mixed                      |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxeWFc0X8rSZGLRn3d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzOMeU4SklfBOOg26J4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxVCuKGlK3I6RYt1eZ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzWvw7L9HUuOAWgC-p4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzR7zHoa-kxVvHXmXx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgyDWBwltZUEVrDBiZZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxl6Q6t9ewOh5dlnJR4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugzou7iSFrKdKlDARoZ4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugx8vPXyb7DBFA_NJld4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwVCJytmkJZE1Gmfxt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
```
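A raw response like this can be parsed and sanity-checked before the codes are trusted. Below is a minimal sketch of such a validator; the allowed vocabularies are inferred only from the values visible in this batch (the actual codebook may define more categories), and `validate_codes` is a hypothetical helper name, not part of any pipeline shown here.

```python
import json

# Category vocabularies observed in this batch of coded comments.
# Assumption: the real codebook may contain additional labels.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"mixed", "outrage", "indifference",
                "resignation", "fear", "approval"},
}


def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    check each dimension against the allowed vocabulary.

    Raises ValueError on malformed records or unknown labels, so bad
    model output fails loudly instead of silently entering the dataset.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={value!r}")
    return records
```

In use, a record whose labels all appear in the vocabulary passes through unchanged, while an out-of-vocabulary label (a common LLM failure mode) raises immediately.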