Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI monitoring AI is probably the only way we can possibly accomplish alignment. If we're really talking about superintelligence that goes way way beyond human thinking, it would be able to, if it so desired, cook up strategies to get around any limitations we placed on it that we would never think of in a million years. And what would be going on inside its head would be so complex that the idea that we could control what it desires (long term) doesn't make much sense to me.
So what do we tend to do to solve the problem of individual agents having too much power? We split that power into a bunch of individuals instead and force them to talk to each other until everyone agrees. We would also likely instruct them to make their conversations public and slow it down for us so people can understand why decisions were made. It wouldn't be AI 1 and AI 2, and if they disagree with each other they try to blow each other up or something, it'd probably look like a digital congress. Only with agents more focused on solving problems than getting re-elected. That way, if one agent turned malignant, there'd be a bunch of other ones around it that haven't and could contain it before it got out of hand.
Is it perfect? Nah, but that's just how it goes; the future is never guaranteed, AI or no.
youtube · AI Governance · 2026-04-19T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz5jbuha1-I164DsJh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw-sTNEXSrXCXAmHhR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwmwVuds5i3X6FnBVd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzyQbgxbRFDhzXVM5J4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz83HIF1EedUc9bDT14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgweA8wOWhlAXOPlmvF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxmjNaTnTxW4OixeG94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxzNDrYxOiF8NW5S614AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzIVmJM4oa1gDXILBN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwcWBmMFEHhRinAEWB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
```
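A response like this can be parsed, looked up by comment ID, and sanity-checked in a few lines. Below is a minimal sketch in Python; the allowed label sets are inferred only from the values appearing in this one response, not from the project's actual codebook, and the `raw` string is truncated to two of the ten entries for brevity.

```python
import json

# Two entries copied verbatim from the raw response above (truncated for brevity).
raw = '''[
{"id": "ytc_UgzIVmJM4oa1gDXILBN4AaABAg", "responsibility": "ai_itself",
 "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
{"id": "ytc_Ugz5jbuha1-I164DsJh4AaABAg", "responsibility": "none",
 "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

# Label sets inferred from values seen in this response -- an assumption,
# not an authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def validate(record):
    """Return the dimensions whose label falls outside the allowed set."""
    return [dim for dim, ok in ALLOWED.items() if record.get(dim) not in ok]

def lookup(records, comment_id):
    """Find one coded record by its full comment ID, or None."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
rec = lookup(records, "ytc_UgzIVmJM4oa1gDXILBN4AaABAg")
print(rec["emotion"])                      # fear
print([validate(r) for r in records])      # [[], []] -- all labels valid
```

The `validate` check catches the common failure mode of batch coding prompts: the model inventing a label outside the schema, which would otherwise silently skew downstream counts.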