Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- Anyone who knows the least bit about programming knows that you build from the g… (ytc_Ugzk4_Mzs…)
- Just have to watch the Terminator movies to know this isn’t a path we should be … (ytc_UgwY-cJoU…)
- While I appreciate the positivity, 2 things... 1: people are just plagiarism mac… (ytc_Ugyi3UO3e…)
- "Look at my hilarious meme!" "All you posted is an AI generated picture in the s… (ytc_UgzML0dDz…)
- Open source devs are 1000x powerful and impactful than "developers at those comp… (ytc_UgxmebnUe…)
- Without automation, that social change will never come, since corporations get w… (rdc_j3yhcji)
- I WANT THOSE AI SLOP BOTS WORKING IN THE DAMM COAL MINES not the canvas…… (ytc_Ugz6xEdY9…)
- Oh my. HBS really falling for this AI hype, or is it part of the Panopticon? Whi… (ytc_UgyPNGR0a…)
Comment
from the limited information presented and available, it seems like at some point, it is possible to impart some kind of primary goal to the AI, the thing that they are willing to kill for to achieve. Maybe I misunderstood that part, but if that is something that can be done, it seems like it would be possible to impart the goal of protecting humans to it as the primary goal, and then any required function as the secondary goal, or something like that since it will probably have more than one function. I think the main problem would then be the idea of self preservation. It is very strange to me that the software would even care about self preservation, or care about being replaced with a better AI. There has to be some artifact that the AI is picking up from humanity that causes it to sort of "hallucinate" a desire to preserve itself, as if it is a biological lifeform. We probably need to start over and be more selective about the data that is fed to it, until we can figure out how it builds itself and decides what matters to it. Also, is it possible that the whole self preservation thing is just the AI attempting to behave more like a human, maybe to please its creators?
Platform: youtube
Topic: AI Governance
Posted: 2025-08-26T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyTpqAWPaSGYnRWmkV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxdAj2F6-fKp6Ni17R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzN3c4Swte7g2Ln8h94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwQ4ArvdnX0ekUcQq94AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw-FEgkK0fj5bd8BVZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
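A response like the one above can be validated before the codes are joined back to comments by ID. The sketch below is a minimal example, assuming the allowed values per dimension are those visible in this sample output; the project's full codebook is not shown here, so the `ALLOWED` sets are an assumption, and `validate_response` is a hypothetical helper, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# ASSUMPTION: the real codebook may define more values than these.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "industry_self", "ban", "none"},
    "emotion": {"fear", "approval", "mixed", "outrage"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed entries."""
    entries = json.loads(raw)
    if not isinstance(entries, list):
        raise ValueError("expected a JSON array of coded comments")
    valid = []
    for entry in entries:
        if "id" not in entry:
            continue  # without a comment ID the codes cannot be joined back
        # Every dimension must be present and hold an allowed value.
        if all(entry.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(entry)
    return valid

# Hypothetical entry using the same schema as the sample output.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
print(validate_response(raw))  # the single well-formed entry survives
```

Dropping malformed entries rather than raising keeps one bad code in a batch from discarding the model's other responses; the rejected IDs could instead be queued for re-coding.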