Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It would be human naivety and stupidity if this fictional story were to occur. It could actually be overcome by Superintelligence (ASI) controlling/correcting Superintelligence (ASI) to comply with the moral ethics applicable to humans.
Creating a system of checks and balances to prevent the ASI from cheating.
The system created has no advantages or benefits; the ASI does not view humans as masters/employers.
• AI Debate (Anti-Collusion): Instead of monitoring, the AI should be forced to debate competitively with humans. ASIs that successfully expose the lies of other ASIs will be rewarded, making honesty the most profitable strategy.
• Constitution Hardware Lock: The constitution must be embedded in firmware that cannot be changed by the ASI software, and any attempted violation must trigger a physical power cut (kill switch).
Conclusion: Your foundation is solid. However, to be "unbreakable," the system must be designed based on Game Theory Principles, where internal incentives always push the ASI toward honesty.
youtube · AI Governance · 2025-11-29T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwdbYx8zWR7JqghuiJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgzPPgJw5T9tFZHbXLt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"ytc_Ugxp4aF6WY4CNEI-dnB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},{"id":"ytc_UgzKK62oK4L3DxMcLxV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgxG5lSMvsLA8qljPf94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgwnAzOKVUJWqjY2c8x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},{"id":"ytc_UgxlhDZWMDbASJdYxGt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugw830JMepCaGdkESPx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_Ugy9fkdnvMv3Q5yVR5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},{"id":"ytc_Ugy3458wm0_YUwOuHZx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"]}
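Note that the raw response above ends in `"approval"]}` rather than `"approval"}]`, so a strict JSON parse fails — which would explain why every dimension in the coding result reads "unclear". A minimal sketch of a tolerant parser (the function name and fallback strategy are illustrative, not part of the actual pipeline) might look like this:

```python
import json

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a model's JSON-array coding response.

    Tolerates one common LLM output glitch: the closing array/object
    brackets swapped at the very end (`]}` instead of `}]`). Returns an
    empty list if the response still cannot be parsed, in which case the
    caller records every coding dimension as "unclear".
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Attempt a repair: swap a trailing `]}` for `}]`.
    repaired = raw.rstrip()
    if repaired.endswith("]}"):
        repaired = repaired[:-2] + "}]"
        try:
            return json.loads(repaired)
        except json.JSONDecodeError:
            pass
    return []

# A shortened version of the malformed ending shown above:
codes = parse_coding_response('[{"id":"ytc_x","emotion":"approval"]}')
```

With the bracket swap applied, the ten coding objects in the raw response above parse cleanly; without it, the strict parse raises and the record falls back to "unclear" across the board.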