Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Could AI be given a 'hard-wired/software in ROM' moral core that scrutinised all actions before output/expression and censored/inhibited those that were unacceptable/counter to prime directives? Like a pre-frontal cortex? It would have to be the most powerful module in any AI implementation in order to be able to outwit any malign actions of other modules.
youtube · AI Governance · 2025-06-16T12:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugw1Ni3m1WF9ouv_Ljl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwlEzxfQKrIsxC0-LJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"sadness"},
  {"id":"ytc_UgxtXqfDu4btrKBaNhx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgykTlJzMHalDFDUXxt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz3V7bzxFRvore4Vot4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxQnNduQVrdPeiLlfB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugwglpc5aLi4HV9Bptl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwpcdneF8oxi5A0MD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"liability","emotion":"unclear"},
  {"id":"ytc_UgzYH0zeVJTsOTivoep4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgygPXXzKwmABcqd-PZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
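The coding result shown above is obtained from the raw batch response by matching on the comment id. A minimal sketch of that lookup in Python, assuming the response is a well-formed JSON array of per-comment records (the `lookup_coding` helper is illustrative, not part of any actual tool; the raw string is abbreviated to two of the records above):

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment id
# (abbreviated here to two entries from the full response above).
raw_response = """[
  {"id":"ytc_UgxtXqfDu4btrKBaNhx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgxQnNduQVrdPeiLlfB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]"""

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the record for one comment id."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

record = lookup_coding(raw_response, "ytc_UgxtXqfDu4btrKBaNhx4AaABAg")
print(record["responsibility"], record["policy"])  # developer regulate
```

Keying the parsed records by id makes the lookup robust to the model returning the batch in a different order than the input comments.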