Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Believing that super-smart AI like humans will arrive between 2027 and 2030 just because computers keep getting much faster ignores some huge real-world problems. We need way more than just speed. There are big roadblocks like needing totally new ideas for how AI actually thinks, getting enough good information for it to learn from, making computer chips much more efficient (not just faster), dealing with the massive amount of electricity AI uses, and figuring out the heavy water use needed to cool the computers. All these major issues have to be solved before we even have a shot at creating this kind of AI, which makes getting it done so soon very doubtful. Also, the AI that's popular now (like ChatGPT) is mostly just really good at finding patterns in huge amounts of data. Other kinds of AI research are happening too. And even when today's AI seems like it's reasoning, it's not truly thinking or understanding things the way a person does.
youtube · AI Moral Status · 2025-04-27T13:1… · ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgxwHvcRxZ1uMgkEjfl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzsLemhJ8IWYet3mPh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzqWEsvuRzl2B7M0Md4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxNd9OTtZVaEyZijwF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgylubE-3kYPpc6iLEZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyzGwDAkFVaTnL5h-l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxk9U99eMvKsjNNlhh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwEs8bW-vVDwY78FFV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgykyjvurmDaoihK5c54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}, {"id":"ytc_Ugz0OyaE4rfSFkInbvV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"} ]