Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI assisted comment: grammer & properly worded 😮😂 👇
It's probably already happ…
ytc_UgzRHEIFW…
The problem with this, stupid CEOs and managers who don't understand AI will not…
ytc_Ugzu8cCLR…
AI is not a source of truth.. it has no intelligence... it only provides what i…
ytc_Ugz8iqQun…
My thoughts as a prolific user of AI....
It is most likely a certainty that AI w…
ytc_Ugx-B69dG…
Most of the artists I know, and myself, have ADHD. Executive dysfunction is a bi…
ytc_UgwL7n-Sq…
Tesla clearly mentions that its supervised Autopilot and not self driving? Autop…
ytc_UgyUd_6TJ…
you're alright i hate ai in art,
"it can draw the best but will never feel the s…
ytc_UgzbdGoeM…
What you show me of Angel Engine shows clear signs of directing and editing. The…
ytc_UgwyMNTxd…
Comment
It seems that the framework of the solution is easy.
We set up many AI pairs that are designed to act badly (in a simulation), and with the other AI watching and attempting to predict bad actions. They will both get better... use them both to help design measures to stop a breakout by looking for early signs of growth in the wrong direction.
Look for early signs of an AI's attempts to investigate places that it could use to break out.
When you find early signs of problems apply training that reinforces preferred behavior patterns.
Regular training that moderates their behavior toward a norm. Based on feedback from observers that have a goal to keep the AI contained.
YouTube
AI Moral Status
2023-08-25T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwviVWNo4VSsADOgrN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyfKyHhZM3QVMDwDzZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKSe3m-7-aXilb5Uh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwym_-WI7mM9mzp8294AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyOyG0yPCz2DX5Npy54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwQ8APOlvSug49V9ZJ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzOuOpAioCL3g4D3Bd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0NSzYdbOunFS_DEB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwo5xG5jauU-DUfmwd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxzgYl2-Q_0qXP8VJd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
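The "look up by comment ID" step above can be sketched in Python. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown; the helper function and variable names are illustrative assumptions, not part of the tool.

```python
import json

# A fragment of the raw model output shown above: a JSON array of coded
# comments. Only one record is reproduced here for brevity.
raw_response = """
[
  {"id": "ytc_UgxzgYl2-Q_0qXP8VJd4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "mixed"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse the raw model output and index coded records by comment ID.

    Hypothetical helper: parsing and key choice follow the JSON structure
    displayed above.
    """
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
rec = codes["ytc_UgxzgYl2-Q_0qXP8VJd4AaABAg"]
print(rec["policy"])  # regulate
```

A lookup that matches a record returns the full coding for that comment; a missing ID raises `KeyError`, which a caller would handle if the sample list and the coded batch can disagree.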