Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One aspect that seems completely false is that silicon would be more energy efficient than human intelligence. As of now it is the opposite by a factor of probably 100. And that may not be a secondary point, since the economic equation of AI is currently not proven to be sustainable, a large part of it being energy inefficiency. So this is far from done even if the risks are real. On the simulation proposition, the existence of a simulation would only make sense if there is a real world being simulated. Besides, why bother trying to save the world from AI if we are in a simulation? These simulations would most probably be run by AI with the very purpose of finding potential vulnerabilities that could end up threatening them in the real world... so Professor Roman Yampolskiy would most definitely be one of many avatars of the simulator, with the purpose of encouraging anti-superintelligence initiatives.
youtube AI Governance 2026-03-30T06:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx6O0jKnc-aFVb2cRN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgylxtVI4V-sY4d2tht4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxvbMzY4drb4b8h-aR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzFNh0IiPUOY-OFGgt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx4C3yTpmNYCI2Er6B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJHT5RBWeqvKxss8F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxusFNv7Cw79Vlnwzl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxeWRsAB03c6aEAtyd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwBwcGPU4x0eRukGCp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyT00TUtBYCbBsaGdJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
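The raw response above is a JSON array with one coding record per comment, keyed by comment id. A minimal Python sketch of how such a response could be parsed and matched back to a specific comment (the record shown here is taken from the response above; the variable names are illustrative, not part of any tool shown):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment
# (two records from the response above, shown here for brevity).
raw = '''[
  {"id":"ytc_Ugx6O0jKnc-aFVb2cRN4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgylxtVI4V-sY4d2tht4AaABAg","responsibility":"government",
   "reasoning":"deontological","policy":"none","emotion":"mixed"}
]'''

records = json.loads(raw)

# Index the records by comment id so each coded comment can be looked up.
by_id = {record["id"]: record for record in records}

# Retrieve the coding for the comment displayed above.
coding = by_id["ytc_Ugx6O0jKnc-aFVb2cRN4AaABAg"]
print(coding["reasoning"])  # consequentialist
print(coding["emotion"])    # indifference
```

Each record carries the same four coding dimensions as the result table (responsibility, reasoning, policy, emotion), so a lookup by id reproduces exactly the values shown under "Coding Result".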