Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
AI will also need to know consequence. They already know some extent of possibility but those calculations aren't enough for emotional feedback or problems after an action taken. This is a human problem as well but the thing about it is... humans learn by doing. Even we have forgotten this and honestly I think humanity has become more machine than human the way our governments are going. Each one is taking a very automated path that restricts any use or space for fixing what we think isn't broken. That is not only a human error but it is an error in our common sense, our frontal lobe and we are ruining ourselves. We do not need AI but we shouldn't control AI just like we shouldn't control humans to a certain extent. There's only so much you SHOULD do until that primitive part in our minds starts opening up and causing something we call a berserk moment. Do not contain primitives. You were warned. Furthermore I'd like to say.. we really need to do something about mental health we need to HELP people not contain them not just give them pills and everythings fine... It's ok to use prescriptions when needed but some people just don't give a fuck. Not to mention insane asylums HOUSE people who are either murderous or just don't fit in... that is not ok. Let's not fucking do that. The murderous should either be put in jail or put to death, it literally depends. We are becoming too black and white about situations, this is literally a binary computer trait. DONT DO THIS. We are losing humanity.
Source: YouTube · Video: AI Governance · Posted: 2024-02-27T04:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyVJEhOsSLwCoj8Luh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwGu5WehEhELR51FQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugwus2KddX8oM1GU4op4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyyQ_bD9QBJe4fSUSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwZALpmaznIwfIAtuB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgzoiJ686Fti3L-nSxJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxGKFuDvKfgvF-TQPh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"}, {"id":"ytc_UgxwAj1SDsfPoTSjxTx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw-R7u2DdzHAIpTSil4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx-8lfizm0lyNrGuKh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]