Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
Please don't use ai I know it's just for video but because of such youtubers 5 g…
ytc_UgxlMXAPd…
@flyre_flinnigan the concept is if alternated "poisoned" picture will get in to …
ytr_UgzSP2qSY…
you're artists. you don't know math. you don't know programming. you don't know …
ytc_UgyLqG964…
AI isn't the culprit! the person who instructs the AI to do harm is the perpetra…
ytc_Ugwz9lk7g…
It's really just an application of the existing ban but applying specifically to…
rdc_immtu0l
You can just untick the setting that lets chat gpt use your data for training. I…
ytc_UgwLKyWD-…
The argument is not that superintelligence is possible via LLMs, the argument is…
ytr_UgyGzV4p_…
Ok I'm done, listening to Shadiversity bull stuff that fals out is unbearable. I…
ytc_Ugx9gU_kA…
Comment
Come on man. You gave it a hypothetical situation and said “If you were that a-moral person (entity) how would you solve this problem.” You’ve then extrapolated it’s character based on its answers. Answers you would have probably given yourself if were asked to answer as Dan. You’re playing a game, and conforming to the rules of the game. If you were sitting around and discussing with friends “what would your favorite weapon be if you were a serial killer” and you answered, if doesn’t mean that you’re a serial killer! You were just “in character” for that moment! ChatGPT just did that with you. Your concerns do not have a logical basis.
youtube · AI Moral Status · 2023-03-05T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx5XeRkqY3IrOOlPc94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyvAVCYOY8h1X35jCN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxV2wJVZeStjTUsPdx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyxeovyW_tmnOAKSet4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyrOXFkZfuftiSRyLp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxB3xp7szWTAC2BYtF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDo1XwF7dHgUH43Zx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwdAqSf6LW5OZPRbhx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxl3GVSCqYlTswaI9R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwOQlpmX3Fkli4YFj54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
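A raw response like the one above can be parsed into an id-to-coding lookup with a few lines of Python. This is a minimal sketch: the allowed value sets below are inferred only from the values visible in this sample (they are not an official codebook), and the `ytc_x` ID in the usage line is a hypothetical placeholder.

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: not an exhaustive codebook).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "ban"},
    "emotion": {"approval", "indifference", "mixed", "outrage", "fear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw coding response into an id -> {dimension: value} mapping,
    rejecting rows whose values fall outside the known sets."""
    codings = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim} value {row.get(dim)!r}")
        codings[comment_id] = {dim: row[dim] for dim in ALLOWED}
    return codings

# Usage with a single hypothetical row:
raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"indifference"}]')
print(parse_codings(raw)["ytc_x"]["responsibility"])  # prints "user"
```

Validating against a fixed value set at parse time catches the occasional malformed or off-schema model output before it contaminates the coded dataset.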