Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Maybe if the rich didn’t lobby for coal and against clean energy we wouldn’t hav…" (rdc_et82b5s)
- "There's one issue with the concern about who's going to buy everything: we curre…" (ytc_UgwvrUToi…)
- "Yeah, im okay with a robot doing the soul crushing job. Once enough labor is aut…" (ytc_UgxvWzh40…)
- "The Big Shitty Bill allows no restrictions on AI for 10 years. Ask yourself why…" (ytc_Ugw5tfUZa…)
- "From what I can tell, execs often thinkAI means general intelligence, not “super…" (ytc_UgwQdeDMK…)
- "There is knowledge and then there is wisdom. Will AI have the wisdom to question…" (ytc_UgzCHI2j3…)
- "@AverageGuy87-r7k You can't underestimate how FAST technology improves itself ov…" (ytr_Ugzml_hgy…)
- "If stolen AI art isn't taken into account soon, it could potentially make every …" (ytc_UgwfUheqB…)
Comment
The people fearmongering about AI is almost exclusively from people who don't actually know anything about AI. The use of AI in an aerial weapon would be dealing with flight complexities a human can't (think error correction), or in target acquisition, it wouldn't be "this thing thinks and decides to bomb something." Fully autonomous UAVs that can be told to go to X location, take pictures, or even drop bombs on certain locations by GPS aren't using AI to make decisions, they're programmed very specifically, nothing short of a human giving them an order and them doing it, and they make far fewer mistakes than humans do.

Your computer is running tens of millions of lines of code that some human built to tell your computer how to do thousands of tasks that it does on your behalf, far better than you ever could, making almost no mistakes. That's how we've made everything you use every day. That's what's in those drones. You don't just sit down and make an "AI drone" that now is a threat to humanity or something.

The right analogy is to compare this to what happened with bombs. We had very basic bombs, they were kinetic. Either s2s ballistics like mortars or bombs you'd just drop out of a plane and they'd blow up wherever they landed. Then we built so-called "smart" bombs in addition to things like guided and fly-by-wire (remotely controlled) bombs. These bombs were able to take a location and used their own thrust manipulations to hit that target far more accurately than a human ever could by just eyeing it and dropping them with a ballistic trajectory. They didn't suddenly decide to start blowing us all up because the "smart" doesn't refer to intelligence. That's what's going on with UAVs, they're still controlled by humans, they do exactly and only exactly what humans tell them to do, they don't think, they have no intelligence, nothing does.

Humanity has yet to build even the most rudimentary intelligence at all, everything we've built is designed by us to do something and it does exactly what we tell it to do, and that isn't likely to change in 1000 years, let alone 100 and absolutely not in 10-20.
Source: youtube · 2018-06-10T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw8VcCmDsHAb83DpnV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyJhY0Q3whDnLh_vSd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyHB1Qraoab6sUe-b14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw5ppOHeOI8NDasPex4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzmIQltZxx3SNHW2ix4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxoLfLSLyAE_TFWUL14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy0IwWNwz5CbGgh4Cp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy1aRhd_2R2pWLhMO14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxf07ulLAVDBQ_GSCF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyZYda85tBp1GUREC54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
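The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a response might be parsed and indexed for lookup by comment ID (the field names are taken from the records above; `raw_response` is a hypothetical variable holding the model's output text, shortened here to two records):

```python
import json

# Hypothetical: the model's raw text output, truncated to two of the
# records shown above for brevity.
raw_response = """
[
  {"id": "ytc_Ugw8VcCmDsHAb83DpnV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy1aRhd_2R2pWLhMO14AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the coded records by comment ID so any single comment's codes
# can be retrieved directly, as the inspector's lookup does.
codes = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codes["ytc_Ugw8VcCmDsHAb83DpnV4AaABAg"]
print(record["emotion"])  # -> indifference
```

If the model sometimes wraps the array in prose or a code fence, the JSON span would need to be extracted first (e.g. slicing from the first `[` to the last `]`) before calling `json.loads`.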