Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzJ22MyN… — "AI will never do this. AI will never do that. Then it does. Human hubris, norma…"
- ytc_UgxhQN9De… — "These people are the ones that'll be subscribing to alpha test runs of The Matri…"
- ytc_Ugy-Slwar… — "AI will destroy humanity asap first chance it gets! We ourselves will have instr…"
- ytr_Ugz35Pt5g… — "@nevaehlynch3399 Hey easy there! Sure they may be lazy but playing around with …"
- ytc_Ugwd_esQ-… — "I'm just gonna drop my opinion here because I want to know what others think: T…"
- ytc_Ugxz68Kkw… — "The only thing that can still stop AI is German bureaucracy. It's our last and o…"
- ytc_UgjPcgIY5… — "well since e treat my son like I want because I'm the one who created him and I …"
- ytc_UgyQ523Ye… — "I think it's important to appreciate that AI will lead to concentration of wealt…"
Comment
It's not like there's zero hope, but rather than trying to slam the brakes, I strongly believe the best option is to pivot. Currently, AI is limited by things like Reinforcement Learning and many using a single agent architecture. Genetic Algorithm development and multiple agent architecture allows for development of agents that on top of allowing for more robust and dynamic skill sets and greater speed and efficiency, rather than being forcefully aligned to human ethics, the selection process can naturally encourage developing ethical parameters towards our own.
That being said, as long as RL is the standard it's kind of a weird situation. RL naturally produces exploitable behaviors and hallucinations, which are both a problem and a way to disrupt them in any case they begin to become dangerous. But as the industry looks to eliminate those problems it may be eliminating that Achilles heel. Especially given the recent development of a process to isolate neurons responsible for those behaviors, this can be concerning.
youtube · AI Governance · 2026-03-18T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwwCSAkZD6PLL_kNDx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyXlSXp10nsT9IHQiR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz7kwwZ--k22ZRM6714AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz9jD2nhnwe2mhixM14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxopntF3ItFRmx8fbR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxI-xcsC7ZgI5JnQWB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy7gZIqX-nflGOgd154AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyBpCo47hMm5raOWO14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxiYGS2fEInRDCA8fd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxBHivph5ar8jQWKg94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
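The raw response above is a JSON array of per-comment codes keyed by comment ID, which is what the "Look up by comment ID" view indexes. As a minimal sketch of how such a batch could be parsed and sanity-checked: the allowed values per dimension below are inferred only from the sample responses shown here (the real codebook may define more categories), and `parse_batch` is a hypothetical helper, not part of any named library.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "distributed", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, validating each dimension."""
    coded = {}
    for row in json.loads(raw):
        cid = row.pop("id")  # remaining keys are the coded dimensions
        for dim, value in row.items():
            if value not in SCHEMA.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = row
    return coded

# Usage with a hypothetical one-element batch:
raw = '[{"id":"ytc_abc","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}]'
batch = parse_batch(raw)
print(batch["ytc_abc"]["emotion"])  # fear
```

Validating against the schema at parse time catches the occasional out-of-vocabulary label an LLM emits before it reaches the coded dataset.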