Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean, neural networks are inspired by human neurons, and digital neurons are even better because they are faster; information is transferred to the whole system quickly, unlike humans, who have to communicate to share what they discovered. And I know it took millions of years for the human brain to reach this level, but machine learning is very fast. Though it takes many resources and powerful chips to train, how is AI not able to reach that level, given its superiority over biological neurons? I have not really studied or researched evolution or neural networks, but as far as I know this means AI will become AGI by 2050, as per future projections. But when AGI comes, what will we be doing? Farming? Or does AGI do it for us? We can't really predict the decisions of a smarter creature, right? Or at least not its complex, long-term ones? People also say that whenever new tech comes, like when computers came, people fear it, but eventually old jobs vanish and new ones are created. But AI is not the same thing. Computers or any other tech need human input and heavy direction; they were just for speed and efficiency, whereas AI is literally a brain. That's all the scientific reasoning I can offer; beyond this I can only think of philosophy right now, like consciousness and such.
youtube AI Jobs 2026-03-29T07:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       unclear
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgyQsT1l5umgYG15aKh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxANGBM771CV4mOgQ94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzJVjDK0PfFJ56vr094AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx5dAO_84qs1nezw6R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyBZgE0DwxMp6iCVy94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzH3WOYOHac93d7_th4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwFCFB2042RCAspUqt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugyq8-QlWJid2YELe1t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxve9Ei9fyyj_UbSrt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwwqgVV5q3uHemm7Yp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}]
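The raw response above is a JSON array in which each record carries the comment id plus the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and looked up per comment, assuming only the field names visible in the output (the helper name `index_by_id` and the two-record sample are illustrative, not part of the pipeline):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgyQsT1l5umgYG15aKh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxANGBM771CV4mOgQ94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

# Field names observed in the response; any record missing one is rejected.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(payload: str) -> dict:
    """Parse a batch coding response and index records by comment id,
    checking that every record carries all four coding dimensions."""
    out = {}
    for rec in json.loads(payload):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing {missing}")
        out[rec["id"]] = rec
    return out

coded = index_by_id(raw)
print(coded["ytc_UgxANGBM771CV4mOgQ94AaABAg"]["emotion"])  # prints "approval"
```

Indexing by id makes it easy to join a record back to its source comment, which is what the "Coding Result" table above does for the comment shown.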