Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Excellent interview, but it has left me with so many questions and observations that I don’t know quite what to do with them…. The first one being that we have always thought of God our creator as an all powerful and knowing being, but for AI its creator is something of a much more inferior intelligence. Then there is the idea of AI becoming sentient and feeling emotions. How is AI going to feel about a life form that seems set on destroying itself for the sake of profit, empathy or disgust? And also, will all AI act as one, or could for example ‚Chat GPT‘ , ‚Meta AI‘ and ‚Deep Seek‘ start a war with each other? I suppose the answer to that depends partly on my next question, what is the pinnacle of intelligence, cooperation or destruction? Just recently humans have started to realise our dependence on our atmosphere remaining as it is and our dependence on other species e.g. bees for food production etc. and hence our own survival. This has lead to renewable energy, sustainability, regenerative farming and rewilding projects,and generally stuff that tries to act to save the natural world, it just remains to be seen if we will do enough on time to save ourselves. Will AI seek ultimate power the way billionaires and dictators do? If so then wouldn’t it seem like the first thing AI would seek to control/wipe out are the billionaires and dictators? And finally the realisation that Social Media has created a kind of ‚psychological Matrix‘, and that Meta etc. is essentially trying to create a physical one with Augmented Reality, which would allow me to escape from one dystopian world into another dystopian world….🤨
youtube AI Governance 2025-06-22T11:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgzXF0KVTgua8TTn6QN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzwBJsK00TNY6bZCMR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyhYU5flqtsmIlVG6F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugzh8wSfvyR8T2uLP4x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzZ35tRXuLp_GbfErN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgycJHmsBKcx1Bhqs_N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugy8SkWPsHa_RvJOvLl4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgydcK2h7CUVt5PAoiJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzVaWgvB3U0TFa5-Ol4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwSqib7pyefh037bfZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"}]
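The raw response is a JSON array in which each record carries the four coded dimensions (responsibility, reasoning, policy, emotion) for one comment, keyed by comment id. A minimal sketch of how such a response could be parsed and looked up per comment follows; `index_by_id` is an illustrative helper (not part of the tool), and the excerpt below includes only two of the ten records from the response above:

```python
import json

# Excerpt of the raw LLM response shown above (two of the ten records).
raw = (
    '[{"id":"ytc_UgzXF0KVTgua8TTn6QN4AaABAg","responsibility":"developer",'
    '"reasoning":"virtue","policy":"none","emotion":"mixed"},'
    ' {"id":"ytc_UgzZ35tRXuLp_GbfErN4AaABAg","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"mixed"}]'
)

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse the model output and index the coded dimensions by comment id."""
    records = json.loads(raw_json)
    return {rec["id"]: {d: rec[d] for d in DIMENSIONS} for rec in records}

coded = index_by_id(raw)
print(coded["ytc_UgzZ35tRXuLp_GbfErN4AaABAg"])
# {'responsibility': 'none', 'reasoning': 'mixed', 'policy': 'none', 'emotion': 'mixed'}
```

Note that the values of this second record match the Coding Result table displayed above (responsibility none, reasoning mixed, policy none, emotion mixed), which suggests it is the record for the quoted comment.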