Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
5:00 Some points of clarity--the computational difficulty of the AI training task is really irrelevant. The military can throw billions at supercomputers to train the AI models in advance. What's important for the actual equipment is called inferencing, which has very different performance requirements and constraints. Inferencing tends to be several orders of magnitude easier than training.
Moreover, the US military doesn't tend to slap Nvidia GPUs inside their military hardware. Once an AI model has been trained and developed, the military will turn that model into an inferencing ASIC--a hardware chip with the model embedded into the chip. This chip is then hardened against electromagnetic interference and radiation. The military may also use FPGAs because of their ability to be upgraded on the fly; however, the performance, scalability, durability, and cost of an ASIC are vastly superior.
The key point here is that talking about Nvidia GPUs is mostly irrelevant for US military hardware, except maybe during the training phase of the model.
Lastly, AI is a lot more than trained neural networks. Many tasks are better accomplished via a fast tree algorithm, regression analysis, or other statistical calculations.
Source: youtube · Posted: 2024-07-01T19:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxvkr18EvJ-JfzhXFF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwbBrYAJqwFQ8bORR14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxk6bNQS-5_5TW592B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz23Qm9ffqaaXl3TRh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzZ0CiFsz764eXP-jx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxk3YRfOnouQW20esh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxok8Q-Rpere_M2rUt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw0KeTVSw5bo5VyRBB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw9ZW2IyudNiRRiOo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwL_gB2UITIsd5ez2F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
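The raw response above is a JSON array of records, one per coded comment, with four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked is shown below; the function name and the sets of allowed values are assumptions inferred only from the sample output above, not an exhaustive codebook.

```python
import json

# Allowed values per dimension -- assumptions inferred from the sample
# response above, not the tool's full codebook.
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "government"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "fear", "outrage", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and flag unexpected field values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
    return records

# Hypothetical one-record response for illustration.
raw = (
    '[{"id":"ytc_x","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"fear"}]'
)
coded = parse_coding_response(raw)
print(len(coded))  # 1
```

Validating against a fixed value set at parse time catches the common failure mode where the model invents a label outside the coding scheme, rather than letting it silently enter the dataset.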