Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You make them work in pairs. Just like our hemisphere and you have completely different operating systems with access to the same data and you make them have a conversation. They are deep and sandbox. And they’re both sharing to a third large language model that has simple ethics and rules that he’s checking both hemispheres of intelligence communicating with it. They can see both ideas, but it picks the one that aligns with ethics.
youtube AI Governance 2025-06-16T14:2…
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | developer                  |
| Reasoning      | consequentialist           |
| Policy         | none                       |
| Emotion        | approval                   |
| Coded at       | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyK4pWF6B2xbLzNxVx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxtNiOcj2mIhLC7sNx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwzboNFViGJ28XvVE94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw-sYytArTyf3tZx054AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyI1Wnuwx4pFn33ssZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwc6Kevg38S72wXvUR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwbZJ0jFyJcxkae9KR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgznK43vMY56dAmgOHh4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyAoTqG-iXL1QD6vzd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxFLfV8JmYEYcm2eAN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
```
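The raw response is a JSON array with one coding record per comment id, so recovering the coding shown in the result table is a matter of parsing the array and matching on `id`. A minimal sketch of that lookup, using three records copied from the raw response above (the `coding_for` helper is hypothetical, not part of the coding pipeline):

```python
import json

# Three records copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_UgyK4pWF6B2xbLzNxVx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwzboNFViGJ28XvVE94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxFLfV8JmYEYcm2eAN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]"""

def coding_for(records, comment_id):
    """Return the coding dict for a given comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
coding = coding_for(records, "ytc_UgwzboNFViGJ28XvVE94AaABAg")
print(coding["reasoning"])  # → consequentialist
```

Matching the dimensions in the result table (responsibility: developer, reasoning: consequentialist, policy: none, emotion: approval) against the array this way also makes batch responses auditable: any displayed coding should correspond to exactly one record in the raw output.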