Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
She has some points that there are different kinds of issues in many contexts, but I do question her judgement of scale, impact and risk. Bloom consuming the same energy to train as 30 homes in a year is insignificant. GPT3 consuming the same as 500 homes in a year is also insignificant. If GPT 4 was at 5000 homes and GPT 5 at 50K homes, it would still in the grand scheme of things be very insignificant if you do care about climate change. If those models can accelerate science into green tech with 10-30%, or bring about other large energy efficiency gains, it easily pays for itself. The sum of MWhs to run inference with large models probably passed the training energy consumed this calendar year, so I'd focus more on that if anything. I'm not at all saying climate doesn't matter, but that the current scale of things noted here isn't worth spending a large effort on, and she didn't present numbers for inference, just that it could become very significant. Social and racial biases of course matter, and should be improved to increase quality and reliability, but I care a lot more about how this impacts macroeconomics and geopolitics, and what AI is going to do to the nature of work and life over the next 3-10 years.
youtube AI Responsibility 2023-11-06T23:2… ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzLNVQwMCp2ZTk_k0Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxBIHG_gc5UmfaPt4B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDI51TsRvKUhjd3bN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzRnlXx1Ew3eNHWtop4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxEoyDZYaPqsX5oFQt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyXJxJJ2gmX92GAiiZ4AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwknZWxjOyOVOs9Irl4AaABAg","responsibility":"government","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzsGc_jTsnFjIuiRt14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy46yDIAXAkLVY9Clh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyqRob3zxYl3zrU2_p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
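To inspect the exact model output for any coded comment, the raw response above can be parsed and looked up by comment id. This is a minimal sketch, not part of the actual pipeline: the `coding_for` helper is hypothetical, and it assumes the response is always a well-formed JSON array of objects in the shape shown.

```python
import json

# Hypothetical helper, assuming the raw LLM response is a JSON array of
# per-comment codings like the one shown above (one object per comment id).
def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

# Example: look up the coding for the comment displayed on this page.
raw_response = """
[
  {"id": "ytc_UgxBIHG_gc5UmfaPt4B4AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "indifference"}
]
"""
coding = coding_for(raw_response, "ytc_UgxBIHG_gc5UmfaPt4B4AaABAg")
```

For the comment on this page, the looked-up entry matches the Coding Result table: responsibility `none`, reasoning `consequentialist`, policy `none`, emotion `indifference`.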