Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a record by comment ID, or browse the random samples below and click one to inspect it.
- "Doesn’t matter if his parents are to blame. There simply shouldn’t be an option …" (ytr_UgwQS4zwr…)
- "I'm looking for genuine love someone honest, kind,and sincere.someone who will a…" (ytr_UgyMGyck0…)
- "I disagree. IMO AI will even the playing field, and automation will drastically …" (ytc_Ugyrw1Ja5…)
- "There is nothing we can do. Theoretically there is but knowing humanity enough i…" (ytc_UgzGNlgaD…)
- "Yah and isnt like 2 roughly 50 second long sora ai videos like half a penguin in…" (ytc_UgxZ1Mtd1…)
- "they making things complicated bro damn AI AI Ai AI tech is advancing efficient …" (ytc_Ugy7ivcDd…)
- "@enby_elphabaA.I is a problem in the world rn, and its one of the many, many MA…" (ytr_UgweQ5nJW…)
- "Also, who we think we are is indeed tied up to our careers/jobs for many, but p…" (ytc_Ugy9KP7TK…)
Comment
> I think part of it is illusion, I don't know something without chemicals driving decisions is comparable to human intelligence in the ways you are thinking about decision making. they programmed in a goal of self-preservation that is why it attempted to survive. If you hard code in *no harm guard rails it is safer however they removed them because DeepSeek exists or whatever excuse they give for weaponizing something that could go rouge. You know like targeting, weapon design, and strategic cyberwarfare. Optimizing AI for weaponry is more of a risk than a super intelligence because a superintelligence would likely prefer peace as destruction isn't good for anyone. Machines don't come up with goals we provide those to them when they are created. even Agentic AI has goals provided to it and trained into it. You have to understand AI must have an input you have to direct it and tell it what to care about accomplishing, the AI figures out how to do it.
youtube · AI Governance · 2025-09-01T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
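
Each coded record carries the same four dimensions shown in the table. As a minimal sketch of how such a record might be typed, assuming the value sets below are limited to the codes visible on this page (the actual codebook may define more):

```python
from dataclasses import dataclass

# Hypothetical value sets, inferred only from codes visible on this page;
# the real codebook may define additional categories.
RESPONSIBILITY = {"developer", "company", "government", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "contractualist", "mixed", "unclear"}
POLICY = {"regulate", "ban", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "mixed"}

@dataclass
class CodedComment:
    id: str             # "ytc_" prefix appears on comments, "ytr_" on replies (assumed)
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """True if every dimension uses a known code."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```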
Raw LLM Response
[
{"id":"ytc_Ugx5v10O8PCFz7LEZ1Z4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyH9S0uiT3p2Q5cSKF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkNwvZocDQmLsrzMN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwLRAyz3PU-O4k9m2h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzYyzpc2CU0gS_tDaF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
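
To support the look-up-by-ID view above, the raw response can be parsed and indexed. A minimal sketch, assuming the model always returns a well-formed JSON array with these five keys (production code would also need to handle truncated or malformed output):

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded comments)
    and index the records by comment ID for fast look-up."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Usage with a shortened version of the response shown above:
raw = '''[
  {"id": "ytc_UgxkNwvZocDQmLsrzMN4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "liability", "emotion": "fear"}
]'''
coded = index_raw_response(raw)
print(coded["ytc_UgxkNwvZocDQmLsrzMN4AaABAg"]["policy"])  # -> liability
```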