Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `rdc_ohwwxsh`: AI bots can't rape or murder you, they're worrying about the wrong thing, and an…
- `ytc_Ugiw0vwfo…`: I can't emphasize enough how right Musk, Hawking, etc are. And it takes just one…
- `ytc_UgxIu97KA…`: Good lord this comment section is cooked. Yall do know that a person using an A…
- `ytc_UgwH2frjh…`: Posted here yesterday that Anthropic's decision would not block Hegseth from usi…
- `ytr_UgxIa-6fP…`: This is why IBM stated that precision regulation is needed. For example, regulat…
- `ytr_UgzcORwrP…`: It's really disappointing to hear the "they'll make new jobs" line. No, you've c…
- `ytc_UgygzW2xy…`: I personally trained my own replacement in code by uploading to Github. The AI n…
- `rdc_ocpj1ya`: Sora failing doesn't mean AI video won't revolutionize Hollywood — it just means…
Comment
An algorithm that can predict and control human response is too alluring not to be pursued. Even this video supposedly warning us about the threat of AI speculates how useful it might be to have AI influence human thought 19:46 . If your faction allows another to achieve control first every part of your philosophy will at the mercy of the other party. I do not see how following this goal of control can be avoided.
Source: youtube · AI Moral Status · 2025-04-29T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzmY6aI6mnhK_AtEfN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwMNMH5i-z85JHiGGB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyWNYdNoRQHpds8zHB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxYKk9CO9cz8-ByeWR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxxj59YYRNRLNIAlg54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgxcGOEU3EFsY-R6XQB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxYAKBWsevLcq1gEsR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx5QPvOOeZCDYLT7zp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwbYzlzlrydQmJ6qcd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugybc7Rz_O0CSVB5tet4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
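Responses like the one above can be checked before the codes are stored. Below is a minimal validation sketch, assuming the four dimensions shown in the Coding Result table; the allowed value sets are inferred from the values observed in this response and the full codebook may define more categories. The function name `validate_batch` is illustrative, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the raw response above.
# Assumption: the actual codebook may permit additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company", "government", "developer"},
    "reasoning": {"unclear", "mixed", "virtue", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability", "industry_self", "ban"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "resignation", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are recognized."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # a code is useless without the comment it belongs to
        if all(row.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(row)
    return valid
```

Running this over the response shown above keeps all ten rows; a row with a misspelled or novel label (e.g. an unlisted `responsibility` value) would be dropped for manual review.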