Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Sir kindly teach us to use Artificial Intelligence (AI) tools, your method of te…" (ytc_UgxxfbxEw…)
- ""anti science" You don't even understand how your magic image making machine wor…" (ytr_UgwE1Nfdr…)
- "Does he know how to write? I never saw him hold a pencil. It's a lost art & a cr…" (ytc_UgzyrOqtZ…)
- "Paying the people that provide your data? Wow who would've thought of this one s…" (ytc_UgwB20gIu…)
- "Artificial intelligence should be at the service of man to help him understand t…" (ytc_UgwJYgKQw…)
- "THE ONLY AI ALLOWED HERE IS ME. *ROXY.* THE BEST AI. Anyways- snap back to reali…" (ytc_Ugw5c81RL…)
- "AI should be ushering in a New age of enlightenment for humanity. instead it is …" (ytc_UgwzHtbWk…)
- "Real life interaction challenges your thinking, which helps critical thinking. T…" (ytc_UgwdmI1as…)
Comment
The "the AI needs to understand what epinephrine is" argument doesn't ring true to me; it has no idea what epinephrine is, or what a patient is, or what any of the words it is saying mean at all; all it "knows" is how to stick a sentence together in a way that simulates what a doctor might be expected to write down in that situation. If the training data includes samples of doctors writing things like that and what words are often used to describe the effects of epinephrine, it might actually come up with the correct answer most of the time. That doesn't mean it "knows" anything.
If you ask it to complete the sentence "I administered a dose of pfitrasminab and the patient reacted by _____," it will also probably give you an answer to that question despite the fact that "pfitrasminab" is a nonsense word I just made up. Because it doesn't *know* anything; it's just answering probabilistically based on its training data.
youtube · AI Moral Status · 2025-11-04T22:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyl8xbbMDubkIbCLlB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw9qnhM8U6V4ym-p6p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgynQOhkwvxuATqD25B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxJB8EAqaa-qhiHt5J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQkWyxzHcwXq6lP6V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzj7cfV4WQql07mbux4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2_dKEb04mm6Qyulp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgylEWd0mSHiGGFIjaB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugyd8jfpG76I2UR_Ep54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzlYpPuP65_axTKv2R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
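The raw model output is a JSON array with one coding object per comment, which is what makes the lookup-by-ID step possible. A minimal sketch of that step, assuming standard-library JSON parsing (the `index_by_id` helper is illustrative, not part of the tool; the two entries are copied from the response above):

```python
import json

# Raw model output: a JSON array, one coding object per comment.
# These two entries are taken verbatim from the response shown above.
raw_response = """
[
  {"id": "ytc_Ugyl8xbbMDubkIbCLlB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzQkWyxzHcwXq6lP6V4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_id(response_text: str) -> dict[str, dict]:
    """Parse the model output and index each coding object by its comment ID."""
    codings = json.loads(response_text)
    return {item["id"]: item for item in codings}

codings = index_by_id(raw_response)
coding = codings["ytc_Ugyl8xbbMDubkIbCLlB4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # -> ai_itself mixed
```

Indexing by ID this way also makes it easy to detect when the model drops or duplicates a comment: compare the set of returned IDs against the batch of IDs that was sent.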