Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a software engineer, I won't even get into the technical aspects of this whole thing because there are a lot of factors glossed over. But I think there's value in exploring the philosophical side of it. If you think about it, all the data, rules, and modeling of AI come from us - our own data - it does not appear from nothing. In that sense, AI's "personality" is a mirror of us as humans. It simply becomes a more efficient, non-emotional version of our own best and worst qualities, and everything in between. We are the datasource of it. So here we sit, fearing the unknown implications of our own proclivities and moral systems enacted by something we ourselves built... and in spite of the intelligence needed for us to create such a thing, we're still so ignorant of the fact that our fears are aimed in the wrong direction. It is our own cruelty and hatred we should fear. AI is not the problem, it's only a mirror we're unwilling to look at. We think of "morality" as a differentiator between us and programmed entities, but we are not more than programs ourselves in a sense, the way we're unwittingly shaped by our environment and our interactions with the world. We sit here fearing our creations' ability to learn and become more powerful than ourselves but if we're actually honest with ourselves and understand that AI did not spawn from nothing, there is an inescapable conclusion that it is a result of us. It is us. We should not fear AI. We should fear ourselves.
YouTube · AI Governance · 2023-07-07T05:0… · ♥ 73
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugx4b4D3q-SyQ2TBjN14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyyuRjiTeYdzXNjo7Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz695oAUakVpSfEBjR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzLP2G9glNjxHxe_iR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugw9ND2GzdIniEiePbV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgyAE6wvyDrBC8RGR4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyWLEcWGdv3NinU0R54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzQt0gKlsAjpm8eJ6t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy_7Y_unDFwBajGenB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugw1Jo_yXoyL4eq0IQB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"})
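Note that the raw response above is not valid JSON: the array opens with `[` but closes with `)`, so a strict `json.loads` call rejects the whole payload, which would be consistent with every dimension in the coding result falling back to "unclear". A minimal sketch of a tolerant parse for this specific failure mode (the function name and fallback behavior are illustrative assumptions, not part of the actual pipeline):

```python
import json

def parse_coding_response(raw: str) -> list:
    """Parse an LLM coding response, tolerating a stray ')' array terminator.

    Returns the list of per-comment code dicts, or [] if unrecoverable.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Failure mode seen above: array closed with ')' instead of ']'.
        repaired = raw.strip()
        if repaired.startswith("[") and repaired.endswith(")"):
            try:
                return json.loads(repaired[:-1] + "]")
            except json.JSONDecodeError:
                return []
        return []

# Same failure mode as the response above, in miniature:
codes = parse_coding_response('[{"id": "ytc_x", "emotion": "fear"})')
```

With a repair step like this, the ten per-comment codes in the response would be recoverable instead of the whole batch being discarded as unparseable.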