Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The so called driver shortage is made up of thin air as a way to justify and pul…" (ytc_Ugyd9jSoP…)
- "Exactly! The image isnt the art, it's the effort, creativity, and soul put into …" (ytr_Ugx8Qt5Ry…)
- "The Robot can only do what its programed to do nothing more. She was programed t…" (ytc_Ugw5wCh3L…)
- "Ai is not capable of generating something that doesn't exist. If you want it to …" (ytr_Ugw3g6SSo…)
- "Yuval is not a Computer Scientist. I don't trust anyone on AI that doesn't have …" (ytc_Ugx1wGTjC…)
- "very good points! although i thought ai companies specifically told us NOT to sa…" (ytc_UgziI07s1…)
- "This! AI is really applied mathematics. Fuck the coders you need those math gee…" (rdc_k8t36yz)
- "An any human who has comman sense an is smart an intelligent u know that ai is …" (ytc_UgwSfYU_z…)
Comment
Fascinating, if highly depressing discussion. I'd like to think humans will get together to solve the AI safety issue, but humans appear to be more divided now than they have done certainly in my lifetime, if not for many more years. Much as with climate change, we're now at a stage where humans won't even begin to act until it's far too late. Money and power comes first, and that will lead to a lot of death and destruction over the coming decades. Perhaps we really are in the final stages of human existence on earth, and maybe we will deserve our extinction.
On the other hand, 80+ years ago there was a race to develop the first weapons capable of destroying all life on the planet many times over, and we're still here. Only just - that threat is still looming, and with the way the world is right now, it wouldn't surprise me to see it happen. But we've held on for nearly a century with technology that would lead to mutually assured destruction, so it's possible that we might hold on for another century with incredibly dangerous AI, as long as it doesn't consider us a threat.
youtube · AI Governance · 2025-06-18T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgxupByq1pJU7KQwkDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwwXkNltFeimPmJMZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyqBsep7so0OTfonf54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwfI1vnL8WvQKAFH-R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgziafO7OcSIak-2lXd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy6rwnX5lC-hwWsLEh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwFuc4NmZft8ezsNg94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxlHbfZ_Kf0goYUIWp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxnLmiNozAbiWg42y94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiCIBcaMIxrDypsY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
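The raw LLM response above is a JSON array with one coding object per comment, keyed by `id` and carrying the four dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the look-up-by-comment-ID step, assuming you have the raw response string in hand (the `lookup_coding` helper and `raw_response` variable are hypothetical names, not part of the tool):

```python
import json

# Abbreviated copy of the raw LLM response shown above: a JSON array
# of per-comment codings (same field names as in the real output).
raw_response = """[
 {"id": "ytc_UgxupByq1pJU7KQwkDV4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
 {"id": "ytc_UgwfI1vnL8WvQKAFH-R4AaABAg", "responsibility": "distributed",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]"""

def lookup_coding(raw, comment_id):
    """Parse a raw LLM response and return the coding dict for one comment ID."""
    codings = json.loads(raw)
    # Return the first matching coding, or None if the ID was not coded.
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgwfI1vnL8WvQKAFH-R4AaABAg")
print(coding["policy"])   # regulate
print(coding["emotion"])  # resignation
```

Because the model returns the whole batch as one array, parsing once and indexing by `id` is also a reasonable design if you expect many look-ups per response.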