Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "And ain’t got no one but to blame but them selves. There’s a reason AI is taking…" (ytc_UgwV7HWX0…)
- "Man, it sure would be a shame if artists all over the internet began to create t…" (ytc_UgxUh4y2Z…)
- "Hmmm what about milking farms? Nobody would want to put a robot where animals ca…" (ytc_UgxOqDT3Q…)
- "Never go to Nevada casinos is my answer. If companies use AI I would just not s…" (ytc_UgyQwF0Iy…)
- "If a robot does somehow gain free will and a consciousness, then yes it does des…" (ytc_UgiXdCewS…)
- "Finally ai will fall, the time of greatness has come, soon all of the pathetic a…" (ytc_Ugwtpz_oz…)
- "I had imagined the therapy that would be needed when a sex-bot's face comes off …" (ytc_Ugx-zf1Oh…)
- "I think Trump might beat AI to it! AI algorithms are tools; AI algorithms detect…" (ytc_UgxTbK2BN…)
Comment
Every time I see TYT do a science/tech story, I cringe, because almost to a man they treat the subject matter like 90 year olds who think cars are new-fangled technology.
TYT needs a dedicated science/tech member, I've said this for a long time, and I'm willing to do the work for free so that Cenk/Ana/etc. don't look like a bunch of morons when they do these stories.
When Musk and Hawking advocate not developing AI for combat purposes, they're not saying that because they think it'll spawn a Skynet-esque robot army dedicated to our destruction; they're saying it because it would just further desensitize the 1st world to the effects of war. It may be controversial to say this, but you kinda have to put people's lives in danger on the battlefield, even if you don't necessarily have to, because the application of force should always be risky, so you don't use the force capriciously. It's bad enough with drones, because then the operator is detached from the risk of the actions, and psychologically this makes him or her care less about the targets and who they might be, and it makes the chain of command care less about who the targets are because there's no risk to their personnel, so they can toss missiles around willy-nilly because at worst, all you're doing is killing a bunch of civilians who are powerless to stop you militarily or politically. Using autonomous combat systems would remove even that little bit of humanity left in the system, so all you'd have to do is toss some of them out into the field and walk away, never really having to know the outcome, aside from a statistical analysis of the machine's performance.
Musk and Hawking are advocating moving away from this possibility for a couple reasons: one, they don't want AI and war to become synonymous, they don't want the resources that go into developing AI to be wasted on something like war, because there's plenty of other problems they'd be much better at solving, and because if autonomous combat machines become economically viable (which they're certain to become if nothing changes), then war will literally become a video game, declared and fought against those who can't produce or afford disposable soldiers, and have to field a fighting force out of its own sons and daughters. War will then become, not a option of last resort, but the first option considered against any nation other than the 1st world powers, dropped at a moment's notice in order to enforce the will of the haves against the have nots.
I'll say this again, TYT DESPERATELY NEEDS SCIENCE/TECH ADVISORS, because Cenk and Ana get so wrapped up in the whole Terminator rigamarole to focus on the deeper implications and subtler nuances.
They're supposed to be the hip and edgy news channel, but they turn into old fogeys when it comes to stories like this.
Source: youtube · 2015-07-30T09:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugiq7KJ6T100kXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugh365DWKmrW13gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugiq02-FnzwitXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgjPJM6JnogjQ3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UggFv-a3g2noD3gCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UggsIQHlAlQBJHgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugg_2NbNeYN8ZXgCoAEC","responsibility":"none","reasoning":"resignation","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugiz180S0BWrMXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugjz03jBITPdiXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugg1h-_yIXiDuXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
```
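The lookup-by-ID view above can be reproduced directly from a raw response like this one. The sketch below is a minimal illustration, assuming the model output is a JSON array of records keyed by a `ytc_…` comment ID (the two sample records are taken from the response above; the function name `index_codings` is our own, not part of any tool shown here):

```python
import json

# Raw model output: a JSON array of coding records, as in the
# "Raw LLM Response" section above (abbreviated to two records).
raw_response = """
[
  {"id": "ytc_Ugiq7KJ6T100kXgCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugh365DWKmrW13gCoAEC", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
# Look up the coding for one comment, as the dashboard's ID lookup does.
print(codings["ytc_Ugiq7KJ6T100kXgCoAEC"]["responsibility"])  # none
```

Indexing once into a dict makes each subsequent ID lookup O(1), which matters when spot-checking many comments against a large batch response.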