Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A lot of what we can and cannot also has to do with the pressing need of surviving, humans have a blindspot "the inability to see exponentially", well given the need, the brains would adapt, in 21st century we aint living in the savanna and there isnt a lion we need to run from, and we are surrounded by fast food and we are lazy and unorganized and we are spoilt and what not....... humans have a very amazing ability to grasp vast, like incredibly vast and complicated things and make simple abstract models, like in an instant, it doesn't happen all the time but it does happen.... There is a lot to the human brain, and its not to undermine what has been accomplished with these AI softwares, but to throw light at how sometimes people underestimate human potential Then, just imagine Tyson says 1905 is 50 carts and 1 car, and in 6 decades, thats like the span of one persons life time, and in 1969, the same century man is suddenly on the moon..... Ofc all of that said, its still best to be sceptical and informed and cautious.....its important to be aware of whats going on to be at the front of the pack and make the best of changing times, nothing should be taken for granted...this wraps one point Now going on to the second point, if there is an AGI, and you talked about terminator and how we should not put it into a machine, so given the definition of what AGI is I reckon that it would not matter, I believe it takes it on itself to explore, nothing we could do will be a problem, just try to imagine and by its definition it would be aware and there comes the question would it even listen to what we tell it.....
youtube · AI Moral Status · 2025-08-17T17:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
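
The four coded dimensions plus the timestamp map naturally onto a small record type. The sketch below is a Python illustration only (the pipeline's actual language and full label sets are not shown here); the comment_id is taken from the entry in the raw response below whose labels match this table, and the example values are the ones from the table.

from dataclasses import dataclass
from datetime import datetime


@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension / Value table above."""
    comment_id: str        # YouTube comment id, e.g. "ytc_..."
    responsibility: str    # e.g. "none", "company", "ai_itself"
    reasoning: str         # e.g. "unclear", "consequentialist", "deontological"
    policy: str            # e.g. "none", "regulate", "liability"
    emotion: str           # e.g. "resignation", "fear", "outrage"
    coded_at: datetime     # timestamp of the coding run


# The record shown above, expressed with this type (the id is inferred from
# the matching entry in the raw response below).
example = CodingResult(
    comment_id="ytc_UgwSmLHqncSLX5vugMJ4AaABAg",
    responsibility="none",
    reasoning="unclear",
    policy="none",
    emotion="resignation",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)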
Raw LLM Response
[ {"id":"ytc_Ugz1N0-KTg2UPsM-ZyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzAfBo8Axs67eO7L5B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyQzdiRP5Cr20IJFCt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwbZbNatFA4Mm0o-bV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzvAkfe4ofQzwNgnyt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxlqzU1F0jjOfQhuFt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxjsMlrx4XJ-5J_oA54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxhpS1A5HsIZsFQUvt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwSmLHqncSLX5vugMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzH6zoGgEfEPoyhC4l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"} ]