Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "AI should come with a warning ⚠️ label. Hazardous to your health due to replacin…" (ytc_UgyuayYnh…)
- "funnily the 5000 dead people in a year would have people pointing fingers saying…" (ytc_Ugx9ulyNc…)
- "@Theultramadman Idk y'all are getting seriously ratioed. Do I seriously have to…" (ytr_UgxYjAOba…)
- "Greetings from Ecuador, artificial intelligence nowadays can transform signi…" (ytc_UgyDBjUcj…)
- "Even for people who are just using AI for fun, it still has an extremely bad aff…" (ytr_UgxUv8nqY…)
- "Perhaps A.I. is a more neutral arbitor than we are and we do not like the result…" (rdc_fal20y5)
- "This isn't 95% AI generated. The 3D animators and real life rotoscoping actors s…" (ytc_UgwEX-Ifn…)
- "Timestamps (Powered by Merlin AI) 00:03 - Karen Hao exposes the human influences…" (ytc_UgweJrsWy…)
Comment
She has some points that there are different kinds of issues in many contexts, but I do question her judgement of scale, impact and risk. Bloom consuming the same energy to train as 30 homes in a year is insignificant. GPT3 consuming the same as 500 homes in a year is also insignificant. If GPT 4 was at 5000 homes and GPT 5 at 50K homes, it would still in the grand scheme of things be very insignificant if you do care about climate change. If those models can accelerate science into green tech with 10-30%, or bring about other large energy efficiency gains, it easily pays for itself. The sum of MWhs to run inference with large models probably passed the training energy consumed this calendar year, so I'd focus more on that if anything. I'm not at all saying climate doesn't matter, but that the current scale of things noted here isn't worth spending a large effort on, and she didn't present numbers for inference, just that it could become very significant.
Social and racial biases of course matter, and should be improved to increase quality and reliability, but I care a lot more about how this impacts macroeconomics and geopolitics, and what AI is going to do to the nature of work and life over the next 3-10 years.
youtube
AI Responsibility
2023-11-06T23:2…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzLNVQwMCp2ZTk_k0Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxBIHG_gc5UmfaPt4B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDI51TsRvKUhjd3bN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzRnlXx1Ew3eNHWtop4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxEoyDZYaPqsX5oFQt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyXJxJJ2gmX92GAiiZ4AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwknZWxjOyOVOs9Irl4AaABAg","responsibility":"government","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzsGc_jTsnFjIuiRt14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy46yDIAXAkLVY9Clh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyqRob3zxYl3zrU2_p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
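Since the raw LLM response is a plain JSON array, inspecting the codes for a given comment only requires parsing the array and indexing it by `id`. A minimal sketch of that lookup, using two records copied from the response above; the helper name `index_codes` is illustrative, not part of the dashboard:

```python
import json

# Two records taken verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgzLNVQwMCp2ZTk_k0Z4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxBIHG_gc5UmfaPt4B4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array and index each coding record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
print(codes["ytc_UgxBIHG_gc5UmfaPt4B4AaABAg"]["emotion"])  # indifference
```

A lookup that misses (an ID the model skipped) surfaces as a `KeyError`, which is a useful signal that the batch response dropped a comment.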