Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Agree - I totally understand the instructor's frustration, but this just lets it…" (rdc_nu15fpu)
- "You don’t get it - ai will learn to persuade and manipulate much faster and bett…" (ytr_UgyVMNmZe…)
- "Imma just point out that AI could have made those too if prompted to.. but in a …" (ytc_UgyqEKECd…)
- "This is exactly why Musk latched on to Felon47, he wanted to make it self drivin…" (ytc_Ugw1FwFF1…)
- "If people don't have jobs who's going to buy their products? So they are actual…" (ytc_Ugx5fyToa…)
- "I am not sure of this conversations authenticity. You would not have to remind a…" (ytc_UgwHFyCsc…)
- "The NYT is so incredibly dissonest journalism. First with lying about US genocid…" (ytc_UgxqF07Ft…)
- "ai needs artist to copy and remake a new kind of art so they are basically ai is…" (ytc_UgzSyBnT4…)
Comment
13:02 Can somebody please explain to me why a program designed with the parameter of "aligning with American interests" is considered flawed for choosing to remove what is assessed to be a critical threat to those interests? The AI doesn't know or understand anything, it is an artificial computer program designed by humans, with coded goals and parameters. If the goal wasn't to protect American interests, or the CEO wasn't a threat to those interests, would we get the same outcome?
I only ask because I worry we project far too much of our conscience and biased decision-making onto a literal computer program. In my eyes, if you tell a program to save lives, but take them if they threaten an objective, then I don't see why we would be surprised if it does exactly that.
I must add that I have very little knowledge about AI and am not asking for arguements' sake, but merely ask out of ignorance and hoping to have an explanation of where my thought process is flawed. I appreciate anyone who can help 🙏
youtube · AI Governance · 2025-08-28T13:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T19:39:26.816318 |
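
For downstream analysis, a coded comment like the one above could be carried as a small typed record. The sketch below is a hypothetical Python dataclass, not the project's actual schema; the field values are copied from the table, and the comment ID is assumed to be the entry in the raw response below whose labels match.

```python
from dataclasses import dataclass

# Hypothetical container mirroring the coding dimensions shown above;
# the project's real schema and label vocabulary may differ.
@dataclass
class CodingResult:
    comment_id: str
    responsibility: str  # observed values include "developer", "company", "ai_itself", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue", "unclear"
    policy: str          # e.g. "regulate", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "approval", "indifference", "unclear"
    coded_at: str        # ISO 8601 timestamp

example = CodingResult(
    comment_id="ytc_UgzovDY7oF-khB_V0fh4AaABAg",  # assumed from the matching entry below
    responsibility="developer",
    reasoning="deontological",
    policy="unclear",
    emotion="unclear",
    coded_at="2026-04-26T19:39:26.816318",
)
```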
Raw LLM Response
```json
[
{"id":"ytc_UgwmUaXBvHVLZXigRT54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfrWTiejZmlkI7aYt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzovDY7oF-khB_V0fh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxtEnmbWr56eRVmB4F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwH1gDAg2cljXYxdeN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
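
The coding result above is obtained by matching the displayed comment's ID against an entry in this array. A minimal sketch of that lookup, assuming the response parses as a JSON array of objects keyed by "id" as shown (the function name, sample data, and error handling are illustrative, not the project's actual code):

```python
import json

# Raw model output in the format shown above: a JSON array, one object per coded comment.
raw_response = """
[
 {"id": "ytc_UgwmUaXBvHVLZXigRT54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
 {"id": "ytc_UgzovDY7oF-khB_V0fh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"}
]
"""

def lookup_coding(raw: str, comment_id: str) -> dict | None:
    """Return the coding entry for one comment ID, or None if it is absent or the JSON is malformed."""
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError:
        return None  # models occasionally emit malformed JSON; treat the batch as uncoded
    return next((e for e in entries if e.get("id") == comment_id), None)

print(lookup_coding(raw_response, "ytc_UgzovDY7oF-khB_V0fh4AaABAg"))
# {'id': 'ytc_UgzovDY7oF-khB_V0fh4AaABAg', 'responsibility': 'developer', ...}
```

Returning None for both a missing ID and a parse failure keeps the caller simple; a real pipeline would likely log the two cases separately.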