Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a record by comment ID, or browse the random samples below.
Random samples (click to inspect):

- "There is an art, and a science, to using AI creatively and more effectively. But…" (ytc_UgxV3D9mK…)
- "Sorry I'm lost! MAN has created AI not some alien from Mars..SO we're going to s…" (ytc_UgxPtCNZJ…)
- "> spend my disposable income on items that can't be measured in wealth inequ…" (rdc_d7krwp9)
- "Where the hell do you see transhuman? The him fetus is being sustain by the robo…" (ytr_UgxQ2iQ0h…)
- "So once 1 robot learn how to kill, steal, etc! All the robots will learn how to …" (ytc_UgzCaocv2…)
- "Chat Gpt and gemini are really bad at chess still. It's only specific chess prog…" (ytc_UgxJ2h0Bf…)
- "It all culminates into neuro-sama, who People _know_ is an AI, but want to belie…" (ytc_Ugz_NsXXQ…)
- "Tesla you know what AI is? It's Artificial Intelligence, their intelligence is m…" (ytr_Ugwu5KhSa…)
Comment
I think in the long term, autonomous weapons will be better and safer than human-wielded weapons. We can program them not to feel fear, and not to kill to protect themselves. We can program them not to be racist. We can program them not to feel a primitive need to feel strong and violently put down anyone who disrespects them. And that is in addition to them not getting tired or bored.
People have an abysmal track record when it comes to choosing when to use a weapon.
Of course, there are concerns: autonomous weapons are powerful, and if they can be hacked too easily, that is a major problem; and if they are programmed by the same people who think bombing a wedding because someone there once had a conversation with a terrorist is acceptable, that is a serious problem. But we are already bombing those weddings. We could instead program autonomous weapons to have far higher standards, and more respect for human life.
The advantages of well-engineered autonomy are so great that it will be worth doing, but it is worth taking things slowly, and being thoughtful and deliberate in how we build and deploy these systems.
youtube
2015-07-31T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugh4Y0du4dZ53ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj9S7He4JB8SngCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugg35B6pH_UxTngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UggYXi1L41uO13gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgixNw6ZI8gvlngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh6JtGH2uCypngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjKo0rMFtp5cHgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UghUF7sRctjnPXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgiMWJhekJHrLHgCoAEC","responsibility":"government","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugjoe2j6EEqhTngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
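The raw response is a flat JSON array, one object per coded comment, with the four coding dimensions as string fields. A minimal sketch of how such a batch could be parsed, indexed by comment ID, and tallied (the field names are taken from the response above; the indexing and tallying code is illustrative, not part of the coding pipeline):

```python
import json
from collections import Counter

# Two entries copied from the raw response above; the real batch has ten.
raw = '''[
{"id":"ytc_Ugh4Y0du4dZ53ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgixNw6ZI8gvlngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

codes = json.loads(raw)

# Index by comment ID, which supports the "look up by comment ID" view.
by_id = {row["id"]: row for row in codes}
print(by_id["ytc_UgixNw6ZI8gvlngCoAEC"]["emotion"])  # approval

# Tally one coding dimension across the batch.
emotions = Counter(row["emotion"] for row in codes)
print(emotions)
```

Because the model returns the comment ID with each object, records can be joined back to the source comments even if the model reorders or drops items.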