Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I’m not an artist nor an ai user, but why can’t we just respect everyone…
ytc_UgyARejrO…
Yes, "it's imperfect so are humans" brushes so many issues aside. Like [Snapchat…
rdc_jifexyx
In all honesty, most people hate the 9-5. To have to work until you retire has a…
ytc_Ugy_QJKlk…
A day late and a dollar short. From the moment AI was allowed access to the inte…
ytc_Ugx7A1WJh…
Thing is that when it comes to digital art compared to traditional, it's actuall…
ytc_Ugzhppv57…
@Nfinity_Rhe must've meant that he is so bad at drawing that ai doesnt understa…
ytr_UgwfgKweo…
SUPERGIRL ISn"T EVEN HIS CREATION OMG also I feel like the "this is AI" is a bai…
ytc_Ugwudd4Ug…
You doomers are dreaming. AI will unlock productivity. Consider Kiru AI. Super i…
ytc_Ugz7aivzD…
Comment
> How is anyone going to enforce it without obliterating privacy on the internet? Pandora’s box is already open.
You need millions in hardware and millions in infrastructure and energy to run foundation training runs.
------------------------
LLaMA 2 65B took 2048 A100s 21 days to train.
For comparison, if you had 4 A100s that'd take about 30 years.
These models require fast interconnects to keep everything in sync. Doing the same run with RTX 4090s to match the VRAM (163,840 GB, or 6,826 RTX 4090s) would take even longer, because the 4090 lacks the card-to-card high-bandwidth NVLink bus.
So you need to have a lot of very expensive specialist hardware and the data centers to run it in.
You can't just grab an old mining rig and do the work. This needs infrastructure.
And remember, LLaMA 2 is not even a cutting-edge model; it's no GPT-4, it's no Claude 3.
-------
It can be regulated because you need a lot of hardware and infrastructure all in one place to train these models, these places can be monitored. You cannot build foundation models on your own PC or even by doing some sort of P2P with others, you need a staggering amount of hardware to train them.
reddit
AI Responsibility
2024-03-18 06:07 UTC (1710742047)
♥ 2
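The compute-scaling claims in the comment above can be sanity-checked with a little arithmetic. A minimal sketch, assuming 80 GB of VRAM per A100 and 24 GB per RTX 4090 (neither figure is stated in the comment); the 2048-GPU, 21-day training figure is the commenter's own:

```python
# Sanity check of the training-compute arithmetic in the comment above.
# Assumed (not in the comment): 80 GB VRAM per A100, 24 GB per RTX 4090.
a100_count = 2048
train_days = 21
gpu_days = a100_count * train_days        # total A100-days for the run

days_on_4 = gpu_days / 4                  # same workload on only 4 A100s
years_on_4 = days_on_4 / 365.25

total_vram_gb = a100_count * 80           # cluster-wide VRAM
rtx4090s_needed = total_vram_gb // 24     # 4090s needed to match that VRAM

print(gpu_days)           # 43008 A100-days
print(round(years_on_4, 1))  # ~29.4 years
print(total_vram_gb)      # 163840 GB
print(rtx4090s_needed)    # 6826 cards
```

The numbers line up with the comment: roughly 30 years on 4 A100s, and about 6,826 RTX 4090s to match the cluster's 163,840 GB of VRAM.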
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_kvdxu7t","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"rdc_kve18sa","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_kve4efh","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_kve4fw3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_kvdxhgv","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
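The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a batch might be parsed and validated; the allowed value sets below are inferred from the codes visible on this page, not an exhaustive schema:

```python
import json

# Allowed values per dimension, inferred from codes seen on this page
# (not an official codebook -- extend as needed).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "indifference", "resignation", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}, skipping invalid rows."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim) for dim in ALLOWED}
        # Keep a row only if every dimension has an allowed value.
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[row["id"]] = codes
    return coded

raw = ('[{"id":"rdc_kve4efh","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"}]')
batch = parse_batch(raw)
print(batch["rdc_kve4efh"]["emotion"])  # resignation
```

Keying by comment ID is what lets each code be joined back to its source comment, as in the "Coding Result" table above.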