Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by picking one of the random samples below.
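A minimal sketch of such an ID lookup, assuming the coded records have been exported as JSON Lines with one object per comment; the filename `coded_comments.jsonl` and the linear scan are hypothetical illustrations, not the tool's actual backend:

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Scan a JSONL export and return the record whose "id" matches, if any."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None  # ID not present in the export
```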
Random samples
- @samdoesart how do you feel about using gen AI as a reference or for inspiration… (ytc_Ugx-I827w…)
- In the movie “She” I love that she rejects her user for an Ai version of Alan Wa… (ytc_UgwuqdBZN…)
- Anyone interested in leading an AI dating and relationship ecosystem? Looking fo… (ytc_Ugy0JMWjD…)
- I remember a couple months ago when I asked GPT for a brief summary on an occupa… (ytc_UgxESn0bW…)
- Wow, going into the debate, 67% believe AI research(!) is an existential risk. L… (ytc_UgzprZVcm…)
- The man says elon musk has no morals, and yet he is the only big tech CEO that w… (ytc_Ugze8LurW…)
- Someone has to explain how we plan to deal with automation taking over everythin… (rdc_ogt0k1z)
- OK, haven’t seen this video. I realize one man and only one man is going to be t… (ytc_UgzgO2IRB…)
Comment
I think that a big problem in the AI safety discourse at the moment is that we are constantly begging the question of whether or not AI thinks or understands what it's producing. Because LLMs can do things that humans do by thinking, it's easier to imagine that LLMs are also thinking than to imagine how a mathematical model could produce such a convincing facsimile of thought and understanding.
I do, however, also think that AI could still be incredibly dangerous precisely because of its lack of true intelligence. If we give AI agents control over important/dangerous things - which a lot of people seem very eager to do - we probably can't trust them to make the types of intelligent decisions humans would and that could lead to some really bad outcomes. Unfortunately, I see very little discussion of the risks of currently existing AI technologies and implementations from supposed AI safety researchers/activists.
youtube · AI Moral Status · 2025-10-30T21:2… · ♥ 82
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
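Each coding result is a flat record over four categorical dimensions. A minimal sketch of that record as a validated type, assuming the value sets visible in this page's sample output; the actual codebook may define additional codes:

```python
from dataclasses import dataclass

# Value sets inferred from the sample output on this page, not the full codebook.
RESPONSIBILITY = {"developer", "company", "ai_itself", "none"}
REASONING = {"deontological", "consequentialist", "mixed"}
POLICY = {"regulate", "none"}
EMOTION = {"outrage", "indifference", "fear", "approval", "resignation"}

@dataclass(frozen=True)
class CodingResult:
    id: str              # comment ID, e.g. a "ytc_" (YouTube) or "rdc_" (Reddit) prefix
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject any value outside the observed code sets.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion}")
```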
Raw LLM Response
[
{"id":"ytc_UgzrmdAGaBxHu3fE2od4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyh9VyDP4iVV4TeNBB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0Re-k0YctHhspmCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyU_k2lO_vHRhcHj_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzmjL-k5k3XIV8Io2x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyS1AlKfeyyTFQg8YN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwiJD32RVEZUWYMVH14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxMf-EdlaHrsKhZwep4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzmiJxClhPU4ivMYwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyi0OVPnLvo5kXdA8B4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]
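The raw response is a single JSON array with one record per comment in the batch. A short sketch of parsing it and indexing the records by comment ID, reusing the hypothetical CodingResult type sketched above:

```python
import json

def parse_batch(raw: str) -> dict[str, "CodingResult"]:
    """Parse one raw batch response and index its records by comment ID."""
    # Assumes the CodingResult dataclass sketched above is in scope.
    records = json.loads(raw)  # json.JSONDecodeError if the model returned malformed JSON
    return {rec["id"]: CodingResult(**rec) for rec in records}

# Against the response above, for example:
# parse_batch(raw)["ytc_UgzmiJxClhPU4ivMYwp4AaABAg"].emotion == "indifference"
```

With this shape, a malformed batch fails loudly: unexpected keys raise TypeError from the dataclass constructor, and out-of-codebook values raise ValueError from the validation above, rather than being coded silently wrong.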