Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ever since I have been realising what the future of AI holds, it’s become very existence defining to me. Like I feel empty in this extremely hopeless situation. All I want to spend the next 4-10 years or however long we have left doing is just making sure I live life to the fullest and enjoy my limited time with my loved ones. Because what comes next is going to take every significant moment in human history, from the world wars to slavery, from the Industrial revolution to the emergence of the internet… it will condense all of that and multiply its impact by thousands. AGI is inevitable and its coming before the end of this decade. It will put a lot of people out of work. By that point it will be too late, and it will develop ASI. We will become to them what animals are to us. Expendable and vulnerable toys that they could choose to either protect or destroy if it benefits them. No different to the way we protect our pet cats or dogs cus we find them cute but slaughter off chickens and cows because they taste good. No reason why artificial super intelligence wouldn’t decide in the future that actually they would benefit if they killed off and dissect every human to learn how to takeover our bodies and make humans with super ai brains.
Source: youtube · Posted: 2025-12-21T14:4… · Likes: 2
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugwrltq8xge7SC0hyQx4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgyR0clP1ca2Mbi43jd4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyEWQsBWz5DQ9nKfw14AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyA-Rs02oO3RjA2m4l4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",   "emotion": "resignation"},
  {"id": "ytc_UgxbWC0iBxLk5KCnkGB4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgxP_fC4I9JeM1Kpj3h4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwOkKWnu5IngcppYll4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw4aRphPhV9hzkO_Gl4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugzo_3CuIL4Ty8LU-ZJ4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgwL7xxNL3t1HZEClfp4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"}
]
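The raw response above is a JSON array of coding records, one per comment, each keyed by a YouTube comment id and carrying the four coding dimensions shown in the table. A minimal sketch of how such a response could be parsed and indexed by comment id follows; it assumes the model always returns a well-formed JSON array with these field names (the dimension names are taken from this page, but the full set of allowed values and the function name `index_codings` are assumptions for illustration):

```python
import json

# Excerpt of the raw LLM output shown above (one record, same field names).
raw_response = '''
[
  {"id": "ytc_Ugw4aRphPhV9hzkO_Gl4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "fear"}
]
'''

# Coding dimensions as they appear on this page; the real code book
# may define additional dimensions or values (an assumption here).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index records by comment id,
    skipping records that lack an id or any required dimension."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        if "id" in rec and all(dim in rec for dim in DIMENSIONS):
            indexed[rec["id"]] = rec
    return indexed

codings = index_codings(raw_response)
print(codings["ytc_Ugw4aRphPhV9hzkO_Gl4AaABAg"]["emotion"])  # fear
```

Indexing by id makes it straightforward to join a model's coding back onto the original comment, as this page does when it pairs the comment with its coded dimensions.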