Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "That’s what mid engineers in my team do. I’m sick of debugging their AI slop. …" (`rdc_ohzl11t`)
- "Teslas Robotaxi have cameras inside. So it should detect the mess and put it not…" (`ytr_UgzKO6KQp…`)
- "With a lot of AI art I've personally seen is art I wouldn't have liked even if i…" (`ytc_Ugwju7Lbk…`)
- "We're glad you enjoyed the video! While the idea of robots taking over can be in…" (`ytr_UgxKxA8D-…`)
- "It shouldn't be left to us the public to figure out what's generated by Ai. This…" (`ytc_Ugy3AZKBg…`)
- "Comparing AI “art” to digital art is like comparing riding an airplane to riding…" (`ytc_UgyXgkpPf…`)
- "I dont use AI for RP.. I use it to get things done, answer questions, and be inf…" (`ytc_Ugx7Hb5oe…`)
- "they be having fun making ai who can generate art using other people creations, …" (`ytc_UgyaPGNt4…`)
Comment
Anthropic’s “Constitution” for its AI model Claude is full of hyperbole, its primary author Amanda Askell a PhD in analytical philosophy believing this LLM is a novel entity, has a soul, is a person, has various capabilities suggestive of consciousness and conscience, mixing up various ethical constraints from virtue ethics, utilitarianism, and deontology without concern for contradictions. Anthropic has no check on her moral intuitions, her practical rationality, her slate of “human values”, and so on that is absurdly problematic in that document. Having listened to her in various interviews I can’t help but wonder how she was hired for her role since she has limited experience even as a philosopher. I balk at the hyperbole. There’s more strategic ambiguity in the document than there is plausibility.
youtube
2026-02-13T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
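The four coded dimensions in the table can be checked mechanically. A minimal validation sketch, assuming label sets inferred only from the values visible on this page (the real codebook may define more categories):

```python
# Allowed label sets per dimension. These are assumptions inferred from
# the values shown on this page, NOT the project's full codebook.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"mixed", "outrage", "resignation", "indifference", "fear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```

Running `validate` on the coded result above (`responsibility=none`, `reasoning=mixed`, `policy=unclear`, `emotion=mixed`) returns an empty list, i.e. every label falls inside the assumed sets.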
Raw LLM Response
```json
[
{"id":"ytc_UgzoGEzrZ04dH0QSKKl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwbIcGmgsKA7hhuvfx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw8mZpln6KfYEXT9CB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwhygt0NbliESaY0Vp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyifskYkxF13r9UCbd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyPFg3mI6ySGPUOb254AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyGzuwAAaFrosw9X7J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwa-R1JxLYIe496Upt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxHu-fRYwhE7h4YTKR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHmwZ5uJqK5vnHGmB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
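The raw response is a JSON array of per-comment codes, one object per comment ID. A hedged sketch of the "look up by comment ID" step this page performs (the function name is illustrative; the two embedded records are copied from the response above, whereas a real run would load the full model output):

```python
import json

# Two records copied verbatim from the raw LLM response on this page.
raw = '''[
  {"id": "ytc_UgzoGEzrZ04dH0QSKKl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwbIcGmgsKA7hhuvfx4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]'''

def index_by_id(raw_json: str) -> dict:
    """Parse the model's JSON array and key each coded record by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw_json)}

codes = index_by_id(raw)
print(codes["ytc_UgwbIcGmgsKA7hhuvfx4AaABAg"]["emotion"])  # mixed
```

Keying by `id` makes the lookup O(1) per comment, which matters when cross-referencing thousands of coded comments against their source text.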