Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
nobody is thinking about this logically though human beings are not smart enough to train an AI to reach AGI, we dont even have the infrastructure to support all the data centers. AI consumes to much of everything, to much money, to much CPU's and to much GPU's , to much electricity etc, it is not sustainable to continue like they are, AI is going to implode, firstly we have not been alive as a species long enough to collect enough data for AI to be trained with, AI has already consumed all human data and is now trying to train itself, this cant work, there will be no AGI this way, secondly they must know this by now, so they are lying to everyone, they are planning to use AI for something different than what they are telling everyone. so this is just my opinion. unless they kill 200 million US citizens to free up electricity and water and CPU/GPUs AI will certainly fail very soon. like people dont understand AI burns CPU's and GPU's up very quickly they constantly need replaced AGI would need to forever be replacing it's CPU's and GPU's we are not even close , technically AI is not even smart, it just reads data and copies it to seem smart, so i would be more worried about what these billionaire morons are actually planning to do with it, probably kill 200 million people with it, i mean logically they cannot sustain 340 million human consumers with no jobs, our countries infrastructure cannot support 15 mins e-cities and there is not enough materials to make electric cars for everyone, so obviously their plan requires hundreds of millions of people to not exist anymore to work.
youtube AI Moral Status 2025-12-14T03:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugw423sbNBGudEfDrAt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzQNGXEZ3QcbYuBFlN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyY6pEi_PT6_Jc5jxJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz3FM6NDchXLm6H6Gh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyBWE0ProEolseBzXB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwkwFMC7KHvXWYrgcp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw7UoxDphSpBCwVCnV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyUoj9pkF_OmSIm6YF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwoNY3LZCNghG5n6PF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugy6VdnRudy-RLQSt2V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]