Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or pick one of the random samples below to inspect it:
- "I have seen some incredible AI-generated images. Is it wrong for me to enjoy the…" (ytc_Ugy8C4W46…)
- "Once ai is able to run society without human work, there will be global income, …" (ytc_UgxUR8Rh5…)
- "You should look at Germany for great response, doing the best of the Euros by a …" (rdc_fn5l83h)
- "I don't personally believe in gloom and doom scenarios. The end of the human rac…" (ytc_UgymyYlEK…)
- "Please be less cliche. Automation could replace warehouse workers and some truck…" (ytr_UgzhFJYg-…)
- "My experience with dealing with AI call center was I canceled an order, and it g…" (ytc_UgzSThfUS…)
- "Tbh, I think Ai art is really cool when I first saw it, the problem is when peop…" (ytc_Ugyl5qW9O…)
- "Then we can joke about "made by robots", and humiliate them with robot right iss…" (rdc_cz38a84)
Comment
NONSENSE: This entire scenario depends on the "alignment problem", which is a FAUX problem -- for so many reasons. For example, consequentialism: i.e. it misconceives virtues, and ethical behaviour generally, as a matter of having the right goals or properly assigned 'utilities', so that instrumental reasoning won't opt for some hideous actions to achieve those consequences. What keeps us from eating each other for lunch is not some global set of goals we all agree to, or even shared utilities. Human beings disagree, radically, on both. This is a short post, so suffice it to say, it would be insane to produce general AI with only 'goals' and 'utility assignments' as behavioural limits. We COULD build AI in the model of a sociopath or Genghis Khan, but we can also, clearly, do better than that, because we DO, EVERY SINGLE DAY (as humans).
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-10-05T04:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
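Each of the four coded dimensions takes a value from a fixed codebook. As a minimal sketch, the Python below validates one coded record against the value sets visible in the samples on this page; the `CODEBOOK` mapping is reconstructed from those samples and is an assumption, not the project's actual schema.

```python
# CODEBOOK is reconstructed from the sample output on this page, so it
# may be incomplete; it is an assumption, not the project's real schema.
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference",
                "resignation"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems in one coded record; empty means valid."""
    problems = []
    for dimension, allowed in CODEBOOK.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} is not in the codebook")
    return problems

example = {"id": "ytc_UgwQbkCSf_XoWQl3yMt4AaABAg", "responsibility": "unclear",
           "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"}
assert validate_record(example) == []
```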
Raw LLM Response
[
{"id":"ytc_UgwQbkCSf_XoWQl3yMt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxl2FyK470AmfYcC9p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx8APfoGBCNKH2AXsB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy816Mjj7dioV5wFjl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwj9LslNFI2wxxeWmh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGSlQh0G-X18QTgWF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxTs6ls9gjFs3z4rB54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwc8HSA3h0k8-RrC5B4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz1knuLb210bFp8GIx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyMsbYLFfF-dP9E3QZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
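As a minimal sketch of how the look-up-by-ID view could be backed, the Python below parses a raw response (a JSON array like the one above) and indexes the records by their `id` field; the function and variable names are illustrative, not taken from the project's code.

```python
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of records) and key it by id."""
    return {record["id"]: record for record in json.loads(raw_response)}

# One record from the raw response above, used as a self-contained example.
raw = """[
  {"id": "ytc_Ugz1knuLb210bFp8GIx4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""
codes = index_by_comment_id(raw)
print(codes["ytc_Ugz1knuLb210bFp8GIx4AaABAg"]["policy"])  # regulate
```

In practice the parse step would also need to handle malformed model output, since nothing guarantees the raw response is valid JSON.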