Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytr_UgyTcWYS8…: "I would assume for low tier tasks this might actually happen quite soon. Or they…"
- ytc_Ugxed9pcP…: "Assuming that the robot was meant to distinguish shapes, as opposed to being pro…"
- ytc_UgyMYttjL…: "with the my ai \"\"\"art\"\"\" is cutting edge / yeah it made me cut, i can no longer ed…"
- ytc_Ugyf2dC8-…: "Until Star Trek Data capable androids are commonplace, technicians doing the job…"
- ytc_UgxTYqN6A…: "The problem with Geoffrey Hinton is that he doesn't have the faintest clue of wh…"
- ytr_Ugwl59KFG…: "100% agree / I own a business and work in tech services & do consultancy / the only …"
- rdc_gvbtj9r: "Thank you. I remember most people on reddit raising pitch forks when they start …"
- ytc_UgylzhoXo…: "No, it’s that states can’t do that, but the federal government still can, this i…"
Comment
Given that AI is fundamentally linear (1s and 0s) but very complicated, and humans are complex, I find it incredibly improbable that AI can EVER be 'aligned' with humans.
How do you align one dimension (linearity) to multiple dimensions (complexity)?
Good luck with that.
We could however potentially align our USE of AI with human life.
In other words, "alignment" is not a technology problem that requires more intelligence, but a governance problem that requires more wisdom.
Sadly our culture has that in very short supply. Apparently we don't even have enough of it (wisdom) to recognise that the AI we already have is rocket fuel to an already f#%^@d civilisational game / social contract.
The AI we have now is NOT fine at all.
Jumping out of a plane without a parachute doesn't kill you. Hitting the ground does.
youtube · AI Governance · 2025-08-26T00:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_Ugx-qzznYwo1reEFsad4AaABAg.AMA8-Pk0B_PAMSIEx5jYF7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugyalir_2NorClRhmXx4AaABAg.AMA2yQD-zr-AMAN33w51l2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugyalir_2NorClRhmXx4AaABAg.AMA2yQD-zr-AMCC2_IYVmC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugyalir_2NorClRhmXx4AaABAg.AMA2yQD-zr-AMCR6jqsD9G","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugyalir_2NorClRhmXx4AaABAg.AMA2yQD-zr-AMCWnuuATiY","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugz5jNs56uezLOZZwbV4AaABAg.AMA1IuTRIZAAMJNat_ztFI","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzOrZwZQyjwQTMXmBh4AaABAg.AMA1-19-Qz_AMGbw0V_JDN","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugwim9XQC9rU_cnMzhN4AaABAg.AM9y2mqUfvFAMB9yJeDbRx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyO6Ytj4-Ipljm9bO54AaABAg.AM9q5Q9W9e7AMSKFocRrSv","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyO6Ytj4-Ipljm9bO54AaABAg.AM9q5Q9W9e7AN3QkA82btd","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
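Looking up a coded comment by ID, as the interface above does, amounts to parsing the raw response and indexing the records. A minimal sketch, assuming the raw output is a valid JSON array of objects keyed by `id` (the variable and function names here are illustrative, not part of the actual tool); the two records are copied from the response above:

```python
import json

# Raw model output: a JSON array of coded comments, one object per comment ID.
# These two records are taken verbatim from the response shown above.
raw_response = """
[
  {"id":"ytr_UgzOrZwZQyjwQTMXmBh4AaABAg.AMA1-19-Qz_AMGbw0V_JDN","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyO6Ytj4-Ipljm9bO54AaABAg.AM9q5Q9W9e7AN3QkA82btd","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the raw LLM response and index each coded record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
record = codes["ytr_UgzOrZwZQyjwQTMXmBh4AaABAg.AMA1-19-Qz_AMGbw0V_JDN"]
print(record["reasoning"], record["emotion"])  # deontological resignation
```

Because each record carries its own `id`, the dictionary index makes the "look up by comment ID" operation a constant-time lookup rather than a scan of the array.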