Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't agree necessarily with how Mergrethe has characterised the 'existential' risks, as she's made it more about inconvenience and inappropriateness of the decision-making. If the software operates in a medical sense and isn't able to extrapolate suitable treatments based upon your sex, then it's not good software and shouldn't have been deployed. If the software is picking and choosing who can have a mortgage despite affordability criteria being met, then it's not in a deployable state. All things being equal, if you don't get a mortgage then there will be other factors outside of affordability. The bigger problem is much smaller: If the AI is better at our jobs than us, what reason will employers have to retain staff who do a mediocre job and cost a lot of money over an AI system that does the job of an office for a fraction of the cost? Literally none. And there's nothing wrong in that idea as we have been replacing people with machines for a very long time. That's the problem, and she hasn't addressed the need for Human-AI competition laws to recognise how much we will need to compete with AI to certify ourselves as valuable.
youtube AI Governance 2024-05-24T07:4… ♥ 1
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxlXsEy-Wbbv8z0e7d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_Ugz3KqTxpJWU6jzzA2l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugy93gzrB7HF4T172id4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgyyLUlEt2cfeh4WgjB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwYjdrcEzFkaHNj-1B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw_B9ZgQoyFtLKkJfl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugwhp5CF0JxU_0MOmpZ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugw2GUsHK1SSKTO8JMB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxlTd2oFU2oiR5eNl14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgyTprMMFMJyQ9dx9jh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]