Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I don't think LAWs themselves as described here are really that scary. What scares me is not when an AI identifies a target from training data, it's when an AI makes higher-level strategic decisions around where its fleet of autonomous weapons goes and operates as well as their engagement parameters. It's the next logical step after making the weapons decide how to engage targets, but it's also a seriously difficult thing to make happen. I would compare LAWs that just identify targets as being like muskets or other single-shot gunpowder weapons while AI that makes strategic and operational decisions to be like a self-loading firearm. They both use gunpowder/AI and they're both dominant, but the second is very obviously a huge advancement from the first; it requires a lot of small advancements to become possible. But when it does? I'm scared.
Source: youtube · Posted: 2024-07-03T00:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyrfSOqNX0lDZn1nyR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyzNHKBWwYs2JZI27p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzV9nyMDjmRs2iNTs94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyDRziKgo1zUyF_rqp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwo7cL3udsdsb2687V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyLa4PNGPTQJRqz5tt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxOJhsc7lmkcHenonF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwwmn_MHcaY-SHSr2d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwnPrlRV9YpgQHJK_B4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwrtQPIu0q07orwPjp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
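The raw response is a JSON array with one object per coded comment. A minimal sketch of how a lookup by comment ID could work: parse the batch and index the records by their `id` field. The `raw_response` excerpt below copies two records from the response above; the function name `index_by_comment_id` is illustrative, not part of any tool shown here.

```python
import json

# Excerpt of one raw LLM batch response: a JSON array of coded comments.
raw_response = '''
[
  {"id": "ytc_Ugwwmn_MHcaY-SHSr2d4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwnPrlRV9YpgQHJK_B4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw batch response and index the coded records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugwwmn_MHcaY-SHSr2d4AaABAg"]["emotion"])  # fear
```

In practice the response text would come straight from the model API, so a real pipeline would also want to guard `json.loads` against malformed output before indexing.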