Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I have used AI a crapload, trying all of them just to see how things are progres…" — ytc_UgxVDR_xE…
- "I felt like I was watching a 70's episode of the bionic man for a second. Wow.…" — ytc_UgwVUg0YL…
- "I am not a techbro by any means, so please don't assume that, but I do want to m…" — ytc_Ugw79vQBV…
- "There's plenty to worry about NOW with LLMs. The poop already fell into the fan.…" — ytr_UgymzLCIy…
- "You saying, AI despite being liberal and having a strong moral compass which der…" — ytc_UgzGtjiCx…
- "But you have to contend with the fact that the vast majority of AI art *does* ha…" — ytr_Ugyh5J_of…
- "He removes bias from AI... 😂 lol what a terd. In other words he removes truth fr…" — ytc_UgyXVkEQn…
- "yeah, enslaving robots is the the first step towards a robot uprising and the en…" — ytc_UgxdROzPJ…
Comment
You mainly describe Geoffrey Hinton's AI sub-goal dystopia. Fair point. In this scenario, the real development, and more importantly the self-improvement, heads in dangerous directions.
I am also fascinated by the almost silent transition of the classic AI-god narratives to LLMs. On the other hand, it's an available blueprint for how to behave toward, and how to perceive, these magic (Clarke's third law) things.
For vital processes in nuclear power plants, surgery, traffic, vaccine development/distribution and so on, I'm actually not that worried. The effort put into making neural networks predictable enough, combined with rule-based checks, will be high enough, I guess.
I see the bigger danger in complex systems that are less clearly of a "closed shape", like the political opinions and demands of billions of people. That is simply different from an engineer's work environment, and it is far harder to find non-dangerous AI interventions there.
youtube
AI Moral Status
2025-10-30T21:0…
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgzczosQYWlhu4gNJCl4AaABAg.AOv-s3FOmrWAOvlatiK76M","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugx3nSuDFDjpcBaDBdF4AaABAg.AOv-oK_sjhJAOv9g7gfeT1","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOv80RLGal7","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOv8HYMTXbX","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOvA2T7sLay","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOvAhqaz2Go","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzUrlFSrmKEOxF9n-N4AaABAg.AOv-_TI2mTXAOvDOnLnfqS","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugy-H-lkhzRZ5AlKyL94AaABAg.AOv-O0chSbaAOwasIs7OId","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgwmIejXdDmc3nz1Zy54AaABAg.AOv-G3_fRc9AOwR7xdoXeU","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgwYSjrR-3YQGIB4WPl4AaABAg.AOv-FgYG3tmAOv3XHim0r5","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
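A raw response like the one above has to be parsed and checked before its codes are trusted, since LLMs occasionally emit values outside the codebook. Below is a minimal sketch of such a check in Python. The allowed values per dimension are inferred from this one sample, not from a documented codebook, so the `SCHEMA` dictionary and the function names (`validate_codings`, `tally`) are assumptions for illustration.

```python
import json
from collections import Counter

# Allowed values per coding dimension, inferred from the sample
# response above (an assumption, not a documented contract).
SCHEMA = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist", "contractualist"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"mixed", "outrage", "indifference", "fear", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against SCHEMA."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

def tally(records: list[dict], dim: str) -> Counter:
    """Count how often each value of one dimension appears."""
    return Counter(rec[dim] for rec in records)
```

On the ten records shown above, `tally(records, "emotion")` would count mixed 4, outrage 3, and indifference, fear, and approval once each, which is a quick sanity check that the distribution matches what the dashboard displays.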