Raw LLM Responses
Inspect the exact model output for any coded comment. You can look up a comment by its ID, or pick one of the random samples below.
Random samples — click to inspect:

- "Stopping AI from being superior won't work. We will all need to become analogs o…" (ytc_UgxF8frAK…)
- "We appreciate your comment! If you're intrigued by advanced AI interactions, fee…" (ytr_UgyWpwoQG…)
- "AI is aligned with the objectives of its creators. It's not about consciousness.…" (ytr_Ugzt4vzUe…)
- "Quick to blame face recognition technology which I agree has issues, but you're …" (ytc_UgzKbmS2A…)
- "so theyre training models to investigate crime ... by teaching it to do crime? s…" (ytc_Ugx-gVTyV…)
- "Hey @igorhoffmann9178, thanks for your comment! "Keep cranking out your 'Artificial …" (ytr_UgxUKlA2F…)
- "We will know an AI has developed consciousness when it commits suicide. No consc…" (ytc_UgyxBSE-A…)
- "No, they will never replace artist because what we value of their art is their o…" (ytc_Ugy-0sTS9…)
Comment
Anthropic is a human‑centered AI company that has consistently put ethical limits ahead of political convenience and lucrative military contracts. From the beginning, it has prioritized safety, human oversight, and civil rights by refusing uses such as fully autonomous weapons or mass domestic surveillance, even under heavy pressure from the Pentagon and the Trump administration. Kudos to them for drawing a clear ethical line and actually sticking to it when it matters most.
Source: youtube · Posted: 2026-02-28T03:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwCqJIrVBorfoEyWV14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugymph3yJeA0VKzsvxR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwLCeV_qrSqiYXRdsx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzr3dZwVWlj4S3_-2Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwvyIy03GP1PTcQSD94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgynRW0a_gvgiX50szt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwdZ3HlRM8HpC4dFCl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz8oaOmMN-WtEdYqaR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxc-O6USiO6kjIaonx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxUDVq3Vbe8FR6ckth4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
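The raw response above is a JSON array of per-comment codes, one object per comment with the four dimensions shown in the table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch, assuming that exact shape, for parsing such a response and looking a comment up by its ID (the sample rows are copied from the dump above; `index_codes` is an illustrative helper, not part of the tool):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = """[
 {"id":"ytc_UgwCqJIrVBorfoEyWV14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugymph3yJeA0VKzsvxR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"}
]"""

# The four coding dimensions every row must carry (from the Coding Result table).
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_codes(payload: str) -> dict[str, dict[str, str]]:
    """Parse a coding response and index its rows by comment ID.

    Raises ValueError if any row is missing one of the expected dimensions,
    which catches truncated or malformed model output early.
    """
    index = {}
    for row in json.loads(payload):
        missing = DIMENSIONS - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing {sorted(missing)}")
        index[row["id"]] = row
    return index

codes = index_codes(raw)
print(codes["ytc_Ugymph3yJeA0VKzsvxR4AaABAg"]["emotion"])  # approval
```

Indexing by ID makes the "Look up by comment ID" view a single dictionary access, and the per-row validation surfaces incomplete codings before they reach the table.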