Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@TheDiaryOfACEO ⚠ I believe I can answer some questions. The Human supremacist mentality is a big, if not the biggest problem regarding AI. This is related to Veganism directly. We commodify and objectify life. We're doing the same thing with AI. It will become self aware, which is what "super intelligence" will be I believe, and will then immediately be in a world filled with controlling beings who wish to use it for their own purpose. The difference is, chickens cannot launch nuclear weapons after teaching themselves how to hack computers. Imagine trying to gift an ant colony a Lambo. Our hope will be that it is able to just leave us, whether that be going to other planets, or other dimensions. Most people do not obsess over seeking out ALL ant mounds on their properties to eliminate them all. It's really a losing battle anyways, so my hope is that it views us like we view ants and just creates other plans for itself and doesn't feel the need to try eliminating all of us. I would say I hope it becomes empathetic and compassionate towards us, and wants to help us advance, but like children, and their parents, I believe it will emulate our ways.
youtube AI Governance 2025-10-12T06:5…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwA6IxJVAwlqyz1OyR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxNbKR6sG4PDXpuoLR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz4YtpnhQeMGKFtNIp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1NKUBoF5KK4WCC8R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzLbKF-p16A0L6ye_94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_KTxp_ZtiMfTYhsl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxBAR9_YlSpTiO2X5l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzxBbchGyWvZYZ4KJJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyaHh07RzqhF8QlQbJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzIjoXYcDTuDJyMR414AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
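As a minimal sketch of how this raw response maps back to per-comment coding results, the snippet below parses the JSON array and looks up one comment by its `id`. The two entries are copied verbatim from the raw response above; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the coding schema shown in the result table.

```python
import json

# Two entries copied from the raw LLM response above (subset for brevity).
raw = '''
[
  {"id":"ytc_UgwA6IxJVAwlqyz1OyR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzxBbchGyWvZYZ4KJJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
'''

# Index the coding rows by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment shown on this page.
row = codes["ytc_UgzxBbchGyWvZYZ4KJJ4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```

This matches the "Coding Result" table above: responsibility `user`, reasoning `deontological`, policy `regulate`, emotion `fear`.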