Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm puzzled by a lot of the AI experts (no denigration here, I respect them deeply). It seems to me that they always neglect the energy-consumption side of the problem, in a world where we have less and less energy at our disposal (fossil fuel reserves are decreasing, and nuclear power plants take a long time to build). I wonder why they forget how energy-dependent AI is. You need to spend millions training models and running them on supercomputers, where a human brain draws only about 20 W. Of course AlphaGo beat Lee Sedol, but with an energy consumption immensely greater than the roughly 20 W of his brain. Humans are far more energy-efficient than computers. We can live without electricity without any problem; machines can't. Isn't that one of their biggest weaknesses?

Another question where I don't see many answers is how an AGI could terminate us without having a grasp on the physical world. Sure, the AGI could kill us all by nuking the planet or engineering a virus that kills every human (anything that can be computer-controlled), and then what? If it hasn't built and taken control of enough robots, how could it build itself in the physical world? How could it replicate? How could it mine resources, transform them, and so on, up to making or extending itself? It would need an army of robots to do all this, right? All the resources used to build machines are controlled in the physical world that we dominate, no? And if a factory started building millions of robots, it would show, no? Not to mention that machines have a strong tendency to break down (the amount of maintenance a machine needs to keep functioning is directly tied to its complexity), whereas human bodies are very good at repairing themselves. Humans can live a long life consuming around 2500 kcal/day (around 70 to 120 W of average power). My son's computer draws four times that just to play a AAA game...
Source: YouTube · AI Governance · 2025-09-23T15:4… · ♥ 2
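As an aside, the commenter's power figures are easy to sanity-check. Below is a minimal sketch in plain Python, using only numbers quoted in the comment itself (2500 kcal/day, the roughly 20 W brain, and "four times that" for the gaming PC); the variable names are mine.

# Sanity-check of the power figures quoted in the comment above.
KCAL_TO_JOULES = 4184          # 1 kcal = 4184 J
SECONDS_PER_DAY = 24 * 60 * 60

def kcal_per_day_to_watts(kcal_per_day: float) -> float:
    """Convert a daily energy intake in kcal/day to average power in watts."""
    return kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY

human = kcal_per_day_to_watts(2500)   # whole-body average power
brain = 20.0                          # commonly cited brain power draw, in watts
gaming_pc = 4 * human                 # "four times that", per the comment

print(f"2500 kcal/day ≈ {human:.0f} W average")   # ≈ 121 W
print(f"brain         ≈ {brain:.0f} W")
print(f"gaming PC     ≈ {gaming_pc:.0f} W")       # ≈ 484 W

At about 121 W whole-body average, the quoted 70 to 120 W range is roughly right, and "four times that" puts the gaming PC near 480 W, a plausible load for a machine running a AAA game.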
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
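The coding schema implied by this table can be written down as a small Python type. A minimal sketch with a hypothetical name (CodedComment); the value vocabularies are simply those observed in the raw response below, not necessarily the codebook's full set.

from typing import Literal, TypedDict

class CodedComment(TypedDict):
    """One coded comment, matching one entry of the raw LLM response."""
    id: str
    responsibility: Literal["none", "ai_itself", "company"]          # values seen in this batch
    reasoning: Literal["consequentialist", "deontological", "mixed"]
    policy: Literal["none", "regulate"]
    emotion: Literal["indifference", "approval", "fear", "outrage", "mixed"]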
Raw LLM Response
[ {"id":"ytc_Ugwxbed6DIuUQuLQ8q94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwWO6iXkgGIxrdK_vB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzrVmM45resZfuyf6t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy6dnVkW0Gy05oX4-F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugwi7vBOag0E-mGwhg14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgznAhsPi1tRMFCIPrV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxkOhIX0rXWNClXGdN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwfcp_Xlm_KTGrcjf94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwOe5OBls5uM9yl6NJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxhkaBIBtc_IUiakLt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"} ]