Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Interesting video. I was unaware of the bias possible in AI and honestly this is all very scary ! There is a lot of in-built racism and sexism in many industries/companies when hiring. If they decide to choose a biased AI model that selects potential new hires based on certain criteria, how will we ever kbow or be able to proove it? It will further legitimise their already recist or sexist behaviour. When this expert says we can decide the road we can do down (ie we can make AI a positive experience), in my opinion this is nonsense! There will not be a road, there will be many many roads and all it takes is for a few crazy geniuses (plenty of them in the world) to set their AI models to a destructive path. Ultimately AI is a machine that learns, so it is INEVITABLE that at some point (20 years, 50, 100 from now, who knows) AI will become more intelligent than humans (scary but true). Once AI hits that point all it will take is ONE BAD CRAZY GENIUS to program into their AI model, one ultimate rule, and the compliance with this one rule will unfortunately cause the destruction of all of humanity. This rule is "DO ANYTHING IN YOUR POWER TO PRESERVE THIS PLANET". AI will quickly realise that to save the planet it has to destroy the cause of Earth's problems that are ruining it i.e. us humans. AI will take control of nuclear codes of any one country and set the ball rolling and us humans will destroy each other (very scary).
youtube · AI Responsibility · 2023-12-19T23:4…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyNU8I06ZnNE_yuJZR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugx0YHpeMzYTM8QdrBN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwnlv2o27S95tu1bsh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyVxNURJ4AHCe17b4N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzzdSOtJdIfy888vN94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzLX7lKAkjKyZE9Q4h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxwgI4KGrHPdruk-0t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxp09V4baSG-7JIv8Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgwSVsegK6_8xJT771J4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzjfOOYVxDR-MIiJzB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
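Because the model returns one JSON array per batch, a single comment's coding can be recovered by parsing the array and indexing it by `id`. A minimal sketch in Python, using only the standard `json` module; the excerpt string is a two-entry subset copied verbatim from the batch above, and the id looked up is the batch entry whose four values match this comment's coding table (identifying it as this comment is an inference from that match, not stated in the source).

```python
import json

# Two-entry excerpt of the raw batch response shown above
# (the full response contains ten entries).
raw = (
    '[{"id":"ytc_UgxwgI4KGrHPdruk-0t4AaABAg","responsibility":"company",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_UgzzdSOtJdIfy888vN94AaABAg","responsibility":"government",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

batch = json.loads(raw)
by_id = {entry["id"]: entry for entry in batch}  # index the batch by comment id

# Look up the coding for one comment
row = by_id["ytc_UgxwgI4KGrHPdruk-0t4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → company deontological regulate fear
```

Indexing once into a dict keeps later lookups O(1), which matters when the same batch response is consulted for every comment it covers.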