Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- I program embedded code for IC chips and microchips. I also write drivers for n… (ytc_UgzZU9wdP…)
- Erm a little bit late with that since the overriding promise before he was elect… (rdc_j4x0s6o)
- So AI will automate "Bullshit Jobs" rather than simplifying processes it will au… (ytc_UgxUnj8Bs…)
- So recently I saw an attempt at making a 3D art AI, ie generates 3D models. And … (ytc_UgwwVT6oL…)
- The US accidents you cite were from before full self driving existed. The earlie… (ytc_UgzJpDsZ8…)
- If you havent had the folks from The Center for Humane Technology, you should th… (ytc_UgyP16Q9X…)
- If AI creates 20% unemployment or more the economy will crash. During the Great… (ytc_UgyLhDjUP…)
- Yup. Google couldn’t take the risks of releasing a product that is so crazy and… (rdc_jplz6gy)
Comment
Anthropic is way better at "safety" than Musk's Grok and no worse than OpenAI. The Defense Department has threatened Anthropic with a ban if it continues to insist on its AI products, prohibiting aiding in the surveillance of citizens and completely automating lethal weapons systems. That means other companies can't use Anthropic products in their systems sold to the government, either. Courtesy of supreme kook, Pete Hegseth, Secretary of War.
youtube
2026-02-17T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwiwepY7kb9NeU-a594AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTNmjD40Zm6Apad3R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8gkQXj8tjdIJtQBp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw9npC8yyf_S6a7_LV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgysDTfOEBi7FiNKWrx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxaZkGd833MMSiEMut4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgycgxUHxBVvWEzhTfx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzjJYjJ4dttInQ8n7h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzYlIjdurJfTEwodtd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzhDqDYssV-40rmMed4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}]
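Since the raw LLM response is a JSON array keyed by comment ID, looking up the coding for any comment is a matter of parsing the array and building an index. The sketch below is illustrative, not part of the tool itself; it uses a two-row excerpt of the response above (the full array works the same way), and the `by_id` dictionary is a hypothetical helper.

```python
import json

# Two-row excerpt of the raw LLM response shown above (a JSON array of
# coded comments, one object per comment ID).
raw = """[
  {"id": "ytc_UgwiwepY7kb9NeU-a594AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgysDTfOEBi7FiNKWrx4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "liability", "emotion": "approval"}
]"""

codes = json.loads(raw)

# Index the rows by comment ID so a coded comment can be looked up directly.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_UgysDTfOEBi7FiNKWrx4AaABAg"]
print(row["responsibility"], row["policy"])  # government liability
```

The same index supports the "Look up by comment ID" workflow: given an ID such as `rdc_jplz6gy` or `ytc_UgzZU9wdP…`, the coded dimensions come back in one dictionary access rather than a scan of the array.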