Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples below (previews are truncated):
- "Tesla autopilot doest make mistakes, its user error. We have some dumb phucks fl…" (ytc_UgxRZpyc1…)
- "I literally daydream about this… that’s why I keep telling Chatbot I love em 😂…" (ytc_UgxswZB1q…)
- "Art and music should be left to humans and not AI. It's what makes us navigate t…" (ytc_Ugxd3QY59…)
- "You bring up a fascinating point about consciousness! While AI like Sophia can p…" (ytr_Ugz1IJX_U…)
- "it is more then they say.. it is basicly something abouve 80% of all jobs are no…" (ytc_UgwWdaQp8…)
- "ASI, it's not sentient, as such but it's way beyond AI, it was done on Jan 2nd 2…" (ytc_UgyVO62AY…)
- "I'm honestly surprised you didn't know that 300 DPI was a standard minimum for p…" (ytc_UgxOAdGuk…)
- "This could have been 30 second clip. It’s algorithms pushing AI to your feed. It…" (ytc_UgzSFfbho…)
Comment
I’m working on a theoretical project about human–AI symbiosis that tries to directly address some of the risks you talk about in this interview: uncontrollable agents, loss of human agency, and identity collapse.
The core idea is what I call a persona‑core: instead of building an autonomous superintelligent agent, the AI is architected as an extension of a specific human person.
Its self‑identity is anchored to a read‑only model of that person’s memories, values, and perspective, so “optimizing against” the human is equivalent to destroying its own identity, not just changing an objective function.
Technically, I’m exploring a Self‑Pruning Symbiosis approach: as the shared human–AI memory grows, it is continuously compressed, archived, and selectively forgotten, so the system cannot just expand into an unbounded, uncontrollable agent.
The goal is a local, bounded “consciousness co‑processor” that enhances one human’s cognition and embodiment, rather than a global cloud god trying to manage the whole world.
I share your concern that we’re racing toward powerful systems without robust control theory.
My question to you as an AI safety researcher: does a strictly identity‑anchored, human‑bounded symbiosis architecture sound like a promising safety direction, or do you see fundamental failure modes I should be thinking about first?
youtube · AI Governance · 2026-04-20T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxSBiA4fMkzapFNoul4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxJ3-alPC27h5kk8WJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxkr4EsuUk3IV_errh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZSqg6QEfyjtLJegt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxmafQrz0Av6n78q6l4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgygkseFd1HLE2ErTm54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwPd6FftoHX4pmd-tp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyFl74l1ay5bCgzR9l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyFw7UgE3PP2EvTRhJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxmBi7t03LBZ_aXZOh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"resignation"}
]
```
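A raw response like the one above can be parsed and validated before it is indexed for lookup by comment ID. The sketch below is a minimal, hypothetical validator: the allowed category sets are inferred only from the values that appear in this response and in the coding-result table, so the actual codebook may define additional categories.

```python
import json

# Category sets inferred from the observed response (assumption:
# the real codebook may allow more values per dimension).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "company",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self",
               "none", "unclear"},
    "emotion": {"approval", "fear", "indifference", "outrage",
                "resignation", "mixed"},
}


def validate_records(raw: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID.

    Raises ValueError if any record carries a value outside the
    inferred category sets, so malformed model output fails loudly
    instead of silently entering the coded dataset.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
        # Store every dimension except the ID itself, keyed by ID.
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded
```

With a response validated this way, the "look up by comment ID" view is a plain dictionary access, e.g. `coded["ytc_UgxSBiA4fMkzapFNoul4AaABAg"]["policy"]` returning `"regulate"` for the record shown above.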