Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "The more we see of AI, the more it looks like a global garbage race.…" (rdc_kyzj07g)
- "This is the scariest part - I don’t have a history at all. I’ve never had an exp…" (rdc_mumeqti)
- "I hate AI for stealing artstyles on artists who work really hard to put love and…" (ytc_Ugwn68OjO…)
- "Blame it on AI, that is the easiest way to convert your bad deeds into misinform…" (ytc_Ugwygvgvd…)
- "he failed to simply ask chatgpt: what if i eat bromide for an extended periode o…" (ytc_UgzFaNEB7…)
- "i really hope that you're right and that we can stop this ai revolution, but all…" (ytc_UgysMTU2e…)
- "AI companies simply could volunteer real artists who would then create like 10 -…" (ytc_UgyISenWd…)
- "Thank you for your kind words. I think Nietzsche is most suspicious of compassio…" (rdc_cxn5rwr)
Comment
If you are not afraid; you did not understand it.
Only computer experts know what is coming.
Physicists have no idea.
By the way, his definition of AI was wrong. (He actually defined something like computerization.)
I wonder if he could give a good definition of "software engineering".
(I wonder if I could give one)
Software engineering was an ill structured concept to begin with ...
But engineering is the only weapon we could think of (against monstrous complexity).
Now we are moving into a new (actually formerly neglected) paradigm called "machine learning".
The problem is that there is no proper engineering here.
"Intelligence" is kept in a "black box".
We can experiment with it, keep statistics, make good guesses.
We can never really know (or design).
I am sure Mr Tyson has heard of Gödel's Incompleteness Theorems
Morally they are computational equivalents of the second law of thermodynamics of physics.
I mean "If you are not afraid; you did not understand it." type of thing ...
Machines will take control too early ..
Too early ..!
youtube · AI Moral Status · 2025-10-07T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyDtupO9bmltIr2M7N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwR8n1RS7C1QEWDnYx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyNiygHMsonXzIEeuZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxRxdFpGrBn0NrCX6J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwnlU8EL3XRvzkVP7N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxyF_zMoS82yaMyy694AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx7IWMhFVEWmx_oxDV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzYf5000temkNKkiWB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxz0Uj7CK4Vtqf3rih4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwHeNwep2Zfve0OQ1V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
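A raw response like the one above can be parsed and sanity-checked before the per-dimension values are written to the coding table. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed values are inferred only from the codes visible in this view (the full codebook may define more categories), and the `validate_batch` helper and the sample ID `ytc_x` are hypothetical.

```python
import json

# Allowed values per dimension, inferred from the codes seen in this view.
# Hypothetical: the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index validated rows by comment ID."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim!r} value {row.get(dim)!r}")
        # Keep only the schema dimensions, keyed by comment ID for lookup.
        coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = validate_batch(raw)
print(coded["ytc_x"]["policy"])  # regulate
```

Keying the result by comment ID matches the "Look up by comment ID" workflow shown above: once a batch is validated, any single comment's coding can be retrieved directly.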