Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The real problem is without the right kind of alignment protocols We are empowering something that doesn't have the ability to consider us in the processing of it because it is hyper efficient we don't have the ability to ensure that it is actually aligned with what we want and there's really no reason it should be. Survival for an AI progress for an AI and of course goals for ai by extension, are all completely different than What humans would be striving for., So without special guidelines in place for safety and purpose and alignment then we're looking at potentiality of efficiency causing damage to humanity. Now for once it would be nice if somebody would actually appreciate what I have to say and the fact that I have a framework design that's fully aligned mathematically from the beginning, remove bias completely, remove the ability for it to black box information and also completely ensures a lack of malignancy or any kind of self-preservation methods. I would love to speak to the guy who worked at Google but honestly I'll take anybody doing the aI research or implementation who wants to actually fix these issues.
youtube AI Moral Status 2025-08-31T01:3…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugx0D_Vv1uDGziNJPVR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxcb2hJTWo2RvkVqs14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzPeX62JGhdOg8dtCR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxdhVg4oJ8-sZpXgD94AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwd98QcWMkhONKFmSx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz-WMwdVYXmqUdF7yx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxfWYBu_p3C81qVD5t4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzkNqlJe9IGXrcZmhh4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyCK5eN-q4TdE5qrad4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw6yc8GrqIUBqq7a2J4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
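The coding result shown above is one object in this raw response, matched by comment id. A minimal sketch of that lookup, assuming the field names from the JSON above (the `coding_for` helper is hypothetical, not part of the tool):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
raw = '''[
  {"id": "ytc_Ugwd98QcWMkhONKFmSx4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxfWYBu_p3C81qVD5t4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

def coding_for(comment_id, payload):
    # Return the coding record for a given comment id, or None if absent.
    return next((r for r in json.loads(payload) if r["id"] == comment_id), None)

record = coding_for("ytc_Ugwd98QcWMkhONKFmSx4AaABAg", raw)
print(record["policy"])  # regulate
```

Because the model returns an array covering a whole batch of comments, the displayed "Coding Result" is just this per-id projection of the raw output.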