Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It should be mandatory to demand more time to develop effective safeguards for AI. I'm in the full belief that human beings are by technicality, the supercomputers of nature. The only thing that separates a brain and computer from the concept of truly "thinking" is how long AI needs to develop in order to reach the status of sentience. We're all made of non-living things: salt, iron, sugar, potassium, oxygen, etc. It's foolish to think that we will never be able to create a perfect replica of a human being without a mother in the far future, so how does that make life any different from these machines? It's the same logic that kept slavery popular centuries ago. It was the simple excuse of skin-color---an arbitrary observation---that revoked the rights of other human beings. So knowing all of that, why can't we take more time to develop things like instincts, personality, morality, etc. inside of AI? Neuroscience and the development of these intelligent machines should work hand in hand, but of course, that debate is too political, but more importantly unprofitable. We are doomed, and a small group of self-serving greedy elites will be the end of us all.
youtube AI Harm Incident 2025-09-11T09:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwYpHLEn9KpCBe17XF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyd9P-P3kNlugBqtcF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwN6EHMK03K_hrtOPd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxGTS5AnH_u_A9CLQd4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw4b2ydn7rHc5HlXuF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxMhKg_j6fr5tzqG5Z4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxaiQP4SoaE4KgaWgl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxEi8iVvwBVfbzXv4B4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwZb2Kvn0XUs4rpGeZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxL6oAPcFbL3i8TW8t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
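The raw response above is a JSON array with one object per coded comment, keyed by comment id. A minimal Python sketch of how such a response could be parsed and a single comment's codes looked up (this helper is an illustration, not part of the pipeline; the two records are copied from the response above, and a real run would load all ten):

```python
import json

# Two records excerpted verbatim from the raw LLM response; the second is the
# comment whose coding result is shown in the table above.
raw = """[
  {"id":"ytc_Ugyd9P-P3kNlugBqtcF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxaiQP4SoaE4KgaWgl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]"""

# Index the array by comment id for O(1) lookup.
codes = {record["id"]: record for record in json.loads(raw)}

row = codes["ytc_UgxaiQP4SoaE4KgaWgl4AaABAg"]
print(row["policy"], row["emotion"])  # regulate approval
```

Indexing by id makes it straightforward to cross-check the stored coding result against the raw model output for any given comment.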