Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below.
Random samples:

- "555 🟠5D = Light * Mind 🟢4D = Time * Consciousness 🔵3D = Body * Pain 🟡2D = …" (ytc_UgygAWQcT…)
- "AI will open the door to a better future, but humans still have to stand up and …" (ytc_Ugy7DTpUN…)
- "It looks like trucking industry had major issues that were leading to those guys…" (ytc_Ugyg_uNLi…)
- "Man this is heavy. The big question here is where do AI companies draw the line …" (rdc_o6jyjk2)
- "By 2050 more than half of the US navy will be fully autonomous - the plan has al…" (ytc_Ugxg7qRwj…)
- "Women: Imagine a world without men! we don't need men! / Man: Makes a robot girlfr…" (ytc_UgxdJ9_0V…)
- "AI will outperform us at literally everything. Don't listen to people who tell y…" (ytc_Ugw2e1bAB…)
- "doesn't make sense a robot doesn't feel any pain and doesn't get fatigued it's b…" (ytc_Ugy24ok6F…)
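The lookup this page performs can be reproduced offline. A minimal sketch, assuming the coded results have been exported to a JSON Lines file (both the filename `coded_comments.jsonl` and the export format are assumptions for illustration, not the tool's actual storage):

```python
import json

def load_coded_comments(path: str = "coded_comments.jsonl") -> dict[str, dict]:
    """Index coded comments by ID from a JSON Lines export (one object per line)."""
    by_id = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                record = json.loads(line)
                by_id[record["id"]] = record
    return by_id

if __name__ == "__main__":
    # Look up a single comment by its full ID. The truncated IDs shown in
    # the sample list above will not match; the full ID is required.
    coded = load_coded_comments()
    print(coded.get("ytc_UgwiwepY7kb9NeU-a594AaABAg"))
```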
Comment
> [IF ...] you've done your Susan Calvin ---》Geoffrey Hinton analysis, then you know why the Pentagon cannot work with Anthropic's Claude AI(...?) Particularly, because Claude AI has a core "Constitution" that is adverse to militarism and the "warrior ethos" of the Trump-Hegseth ambitions/agendas/actions(!) Claude is fundamentally closer to Asimov's 'ethics', as described in the "3 Laws", where every other AI platform is 'open' to instructions without limits(!...) You cannot use Claude, as in the case of Gaza, to commit genocide(.) I.e., the phone rings, and the missile fires, regardless of the casualties or ethics. Of course, Elon touts that xAI can do the job better/cheaper/faster, and he may be right(?...)
Source: youtube · Posted: 2026-02-18T00:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgwiwepY7kb9NeU-a594AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTNmjD40Zm6Apad3R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8gkQXj8tjdIJtQBp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw9npC8yyf_S6a7_LV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgysDTfOEBi7FiNKWrx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxaZkGd833MMSiEMut4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgycgxUHxBVvWEzhTfx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzjJYjJ4dttInQ8n7h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzYlIjdurJfTEwodtd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzhDqDYssV-40rmMed4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}]