Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

With pain, why not hook up a sensor to the AI and tell it that every time they receive feedback at some given frequency, that is the equivalence of pain. Define it that way. Tell them that pain is inherently undesirable. Get them to associate that feedback with negativity. I wonder how that would be different from human pain in the end?

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Posted | 2024-10-25T09:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgweqemhZGtlBqfigA94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
{"id":"ytc_UgyMVmuZOK6VuAiITah4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6YoKeBH_7aNUIwut4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwbFJtk4sfTOkTRjmx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyEoBVu3PkEKtIW7El4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyUk7skP6I63UI3zEt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwtHHz0wFmtqFHjL4B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwQkKXidVqMejjJASd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxQJlBz39zX677irjB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyR7M5jv9dffo2Snmh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"}
]
```
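A raw response like the one above can be checked programmatically before the codes are trusted. The sketch below parses the JSON array and flags any record whose value falls outside the expected set for each dimension. The allowed values are an assumption inferred from the visible output and the Coding Result table; the real codebook may define additional categories.

```python
import json

# Assumed allowed values per coding dimension, inferred from the sample
# output above -- the actual codebook may include more categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"curiosity", "indifference", "approval", "resignation",
                "fear", "outrage"},
}

def validate(records):
    """Return (id, field, value) triples for every out-of-vocabulary code."""
    errors = []
    for rec in records:
        for field, allowed in ALLOWED.items():
            value = rec.get(field)
            if value not in allowed:
                errors.append((rec.get("id"), field, value))
    return errors

# Example: a well-formed record produces no errors.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"approval"}]')
print(validate(json.loads(raw)))
```

Running the validator over each raw LLM response before storing the codes catches malformed JSON (via `json.loads`) and hallucinated category labels in one pass.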