Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the Presenter has an outdated view of AI. As an AI developer, some of the terms he discussed - Iv never heard before - ... I think he was making up half of these. 'Explainable AI' ? (Wow he has a sense of humor) AI models are trained as to what you need them for ... (therefore suggesting an infinite number of types of AI) - but remember these are static... and continuously have to be updated towards the direction of what is needed.

Some reviews below suggest that all corporates should be taught what this video has proposed... I think not ! There are a number of issues with the presenter's interpretations, like suggesting that AI learns on the fly and keeps improving - This can only happen if its being retrained on the new data. Try opening a new session after telling an AI to remember something - It will have forgotten.

He further misinterpreted - what was coined as 'the Singularity' - its not the logarithmic learning - but rather the Point of crossing where AI becomes more intelligent than humans. AI is developing all the time - But at the moment is only able to refect on what its Learnt from history - and only can retrospectively appear intelligent. This is because when trained - the Model is Static - and not changeable unless trained further.... Some developers are investigating using Quantum computers - to continuously change the 'Weightings' on the fly - much like a human does when assessing a problem - and new information comes to light... Here is where its proposed that generative AI will evolve....

However his presentation is fine for kids under 12 ~ Thanks for very very basic understanding tho -
youtube AI Governance 2025-07-23T19:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzH39WGTZ5x5rGmom94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwTcxbSuzwS_zr6UtB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyLWVUzysL_wV0BrdN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz_PbI-ClrHIfMme_J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwBLuMSRaTxlCArm-N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"disappointment"},
  {"id":"ytc_UgzXZmcYmgZD9Otsf654AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzmO0fUdaYknWkS-wN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugwu_bpMTdSI7zrT8GJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugymzu2QJnrqkRfz6Nl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw937OhrcwWQW8N5L94AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"fear"}
]
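A minimal Python sketch of how a raw response like the one above might be parsed into per-comment codes. The record keys ("id", "responsibility", "reasoning", "policy", "emotion") match the response shown; the function names and the fall-back-to-"unclear" behaviour are assumptions for illustration, not this tool's actual implementation.

```python
import json

# Dimensions expected in each record of the raw LLM response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Hypothetical default used when a comment's codes cannot be recovered.
FALLBACK = {dim: "unclear" for dim in DIMENSIONS}


def parse_codes(raw: str) -> dict:
    """Map comment id -> coded dimensions; empty dict if the JSON is malformed."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        # A malformed response (e.g. a stray ')' where ']' belongs)
        # parses to nothing, so every comment falls back to "unclear".
        return {}
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }


def codes_for(raw: str, comment_id: str) -> dict:
    """Codes for one comment, or the "unclear" fallback if it is missing."""
    return parse_codes(raw).get(comment_id, dict(FALLBACK))
```

On well-formed input this yields one row per comment id; on unparseable input every lookup returns the "unclear" fallback, which would be consistent with the all-"unclear" Coding Result shown above.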