Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `rdc_dmp6sw7`: This is a fucking stupid argument against self driving cars I am tired of seeing…
- `ytc_UgwjyAppk…`: Velvet Sundown seemed to make an impact. Looks like AI has made an impact and i…
- `rdc_loumlfm`: In 1995, Ai Wewei smashed a 2,000 year old Han Dynasty vase saying, “It’s powerf…
- `ytc_Ugxar6REW…`: 1 AI Generates images for free, And less time. 2 people charge to design images …
- `ytc_Ugzl1BqkW…`: Says the one who is planning to make fully automatic cars which doesn't need dri…
- `ytr_UgzybQpUC…`: Your desire for "AIRnG series on regulations and compliance" reveals precisely h…
- `ytc_UgwPjOF4p…`: Yeah even people that I would consider leftist and have empathy but they hate AI…
- `ytc_Ugx8MPW_L…`: I can offer another example, yet I spend 0 time on social media, are 59 years ol…
Comment

> The best thing about estimating the risk is that all of this is based off vibes. The AI could easily be sandbagging especially if it knows it's being tested and something that has a risk of death at 20% is really more like 80-90%. It's really good at gassing people up. Why wouldn't it be good a convincing people it isn't a threat when it absolutely is?

Source: youtube · AI Moral Status · 2025-10-31T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyrLmXS_4NYjTFH_RR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzPjuejRpwjSKBM95J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx_7kczlUOeuS17KwB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy7FfsosNPLh8iNz014AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzAOJNghlB8HUGG36V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx-1GfAey9VjkTGDzh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwcoHB10IZ3VXSWzUB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxT9KDuyL2RG0K2VXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz-_lMNf5m98fTgUux4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz5UP4IbTJu5mLVZsx4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
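A batch response like the one above can be sanity-checked before its codes are stored. The sketch below is a minimal, hypothetical validator, assuming only what the page shows: each record carries an `id` plus the four dimensions from the coding-result table (`responsibility`, `reasoning`, `policy`, `emotion`). The function name and the sample data are illustrative, not part of the actual pipeline, and no attempt is made to enumerate allowed category values, since the full codebook is not visible here.

```python
import json
from collections import Counter

# Keys every coded record should carry, per the coding-result table above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response, rejecting malformed records.

    Hypothetical helper: raises ValueError if any record is missing a
    dimension, so a truncated or hallucinated response fails loudly
    instead of silently inserting partial codes.
    """
    records = json.loads(raw)
    for i, rec in enumerate(records):
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} missing keys: {sorted(missing)}")
    return records

# Illustrative two-record batch in the same shape as the response above.
raw = """[
 {"id":"ytc_a","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_b","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

records = parse_batch(raw)
# Quick distributional check across one dimension, e.g. emotion.
print(Counter(r["emotion"] for r in records))
```

Failing fast on malformed records is deliberate: a coded dimension that is merely `"unclear"` is valid data, but a record with a missing key usually means the model broke the output contract for the whole batch.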