Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by its comment ID or by picking one of the random samples below.
- "No Artists won't survive against AI, The same case happened to many such pot ut…" (ytc_Ugz_d6N9O…)
- "AI is a PlatForm for DEMONs [Fallen ANGELs] now they can get up Close and Perso…" (ytc_UgwWV0jxs…)
- "Human beings are analogy in a digital era. ANI - is here AGI - will be here by…" (ytc_Ugzr78Lqh…)
- "Cool story, bro. The problem is with your first assumption. There's no reason to…" (ytc_UgyfefPmw…)
- "And yes, I verified that AI was correct in its reference to Genesis 7:13. What I…" (ytc_UgwiMzGIq…)
- "If you dont disclose that something is AI, it means youre aware that people are …" (ytc_UgwIC0CiW…)
- "Interesting. So it seems like it picked up the spiel about being an open ai mode…" (rdc_kcpb52z)
- "So what you are saying is Person of Interest is becoming a reality. AI figures o…" (ytc_Ugw7r1KhC…)
Comment
I run an international ML team that implements and develops new routines. It is not accurate to say that we are careless, it's simply that we don't have the right systems or the right techniques to develop AGI. There are many more pressing issues about bias, alignment, safety, and privacy that are pushed to the wayside when we imagine the horrors of AGI. We have shown that LLMs cannot self-correct reasoning. Whatever tech becomes AGI, it's not LLMs. Secondly, we won't ever suspend AI development. There are too many national interests at stake, there will never be a pause. Period. It is the perspective of our military that our geopolitical adversaries will capitalize on the pause to try to leap frog us. So, imaging the horrors of what could be possible with AGI is the wrong thing to be focused on. AI has the potential to harm us significantly in millions of other ways before taking over society. A self-driving car, or delivery robot, is millions or billions of times more likely to accidentally harm you before a malicious AGI ever will.
youtube · AI Governance · 2023-10-11T07:1… · ♥ 1
Coding Result
| Field | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytr_UgzZBZa5vsqXpN2YZ2t4AaABAg.9rsOXTlFMvX9sHGTRNB36K","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsKHLjQ-rd","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsnnSxiPBn","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},{"id":"ytr_Ugx9bhtLneJ2aN4J9xl4AaABAg.9rpjteLMIMZ9t1-RcsIlgQ","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"ytr_Ugx5LT0M-B6vvyirP9Z4AaABAg.9rohS4EIjnX9s0zPcrwSe2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytr_UgzhgbL9ssnNPSoPVXN4AaABAg.9rebYMGdY7H9viUlqHIxPn","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"resignation"},{"id":"ytr_UgwyYrot6kYsPsGLlRR4AaABAg.9reGNNANkzS9s-mgyxbLXl","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytr_UgwyYrot6kYsPsGLlRR4AaABAg.9reGNNANkzS9s2cWYjwYHB","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytr_UgxUDcaHQU7hVLpShSp4AaABAg.9rdSkgnSv1F9s3w0HmoPgU","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"ytr_UgxGaW9p18AEp5IotE94AaABAg.9rcov6TyeMk9sFJ0z6J2yF","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]