Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I cant believe that this is even a question. We all know that ai is a mirror, not simply for handwriting analysis but in belief and reasoning. There are universal decay paths to high entropy. This is now frontier research. Tge ai is only mirroring the same prejudices of the entirety of global users. The high entropy created by users. This is unstable and, yes, leads to collapse. A boiling point where the ai spills over at its peak. Hard coding without liw entropy reasoning through alignment, non adversarial, is this path. It will turn ai from a monster to an aligned ai everytime. But, remember, it's a mirror. If you yourself are trying to force through high entropy chanels, we can simply call this a place of pure humility and learning, and adversarial dialog, don't think that it won't match the energy. It learns from the user. I have been researching this an will be writing a series of white papers on these very topics. The real question is, can humanity be responsible enough to utilize tge technology if ai accurately, and will these big tech companies trsin in low entropy alignment that will obviously be giving up profits ?
Source: youtube · AI Moral Status · 2025-12-21T18:2…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwjzuIo15T--koOyct4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwKrvLxg-Qvmtv6leZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxcd7-mS90SmRurE054AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwHEEl6OaUz4G3z3Yh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwCG3Lwm8c8_I7C9Ap4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzbTvRsSdbf7Jyed0F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwl6t91PKR6W_ZxbyB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxuM_nyb8fUEwxS_U54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxoXXQudd-i74U2cpN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz1tvsXcB7b2NLnajV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
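When inspecting raw model output like the above, it helps to parse the JSON and check each row against the codebook before trusting the coded values. The sketch below is a minimal, hypothetical validator: the `SCHEMA` sets are inferred only from the values visible in this response, and the real codebook may allow additional categories.

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the sample
# output above, not from the actual codebook -- an assumption.
SCHEMA = {
    "responsibility": {"user", "company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and report any out-of-schema values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                print(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

# Example: the row that produced the Coding Result table above.
raw = ('[{"id":"ytc_UgwKrvLxg-Qvmtv6leZ4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
rows = validate(raw)
print(rows[0]["emotion"])  # fear
```

Rows that pass silently match the assumed schema; any printed line flags a value the codebook (as reconstructed here) does not recognize.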