Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "Dude sorry to say, but this is because you're using it wrong. Look at this - fi…" (`rdc_jg9gddw`)
- "As customers we have to suffer this AI nonsense just so corporations can profit …" (`ytc_UgynfEijU…`)
- "That's stupid, LLM can't be concious in the first place, it's just a large model…" (`ytc_Ugw6KSlma…`)
- "Would you rather have a robot or a possible libtard it's no longer worth the ris…" (`ytc_UgzUbHFEv…`)
- "@zeppie_ Kindly let me know if you find an A.I being sentient enough to start it…" (`ytr_Ugx1u_dm-…`)
- "What problem I see also is personality! All AI will have one personality, think …" (`ytc_UgzQsqUkv…`)
- "The end has already arrived. So handing the robot the gun was the correct decis…" (`ytc_UgxBo6o7S…`)
- "these stupid computers are only going to do what they are programed to do....hum…" (`ytc_UgxCXoqL9…`)
Comment
> @razgriz I'm a fan of Computer/Numberphile vids as well. So, Cambridge Analytica was again a corporate entity created for a strategy that could only be enacted by the labor of coders. They more closely resembled a rogue intelligence group mixed with a troll farm, than a software suite that any one person could orchestrate in a lone act of terrorism. There were data breaches and leaks of the data and tools they collected and hacked together, plus some insight into how big social media corporations allowed C.A. to leverage their own API's that led to the so far toothless attempts at new regulation, but that fight is ongoing. This video is more worried about deploying bespoke AI weaponry in the field, as-is. To me, that's just more of the same competitive capitalism we've been up to for a while now. Even if they allow smart tanks and robotic dogs to fire bullets at hapless shepherds in some desert (their usual targets), they're only doing basic things like image recognition and other signals analysis to identify and fire upon a target (are there sheep around, does the guy have a colorful rag on his head and an AK-47? If tallied true's are >= 2, jump to fire function). Those types of devices themselves aren't impervious to conventional munitions fired by humans, and they're an interdependent fusion of hardware and software that can't escape or change it's own form. They also can't (re)produce themselves. There would need to be a convergence of nanotech, manufacturing and some kind of universal architecture of computation that all the different forms of AI could run across, plus there would need to be a universal network fabric such as what SpaceX is building right now. Once AGI's are no longer a physical locus of interdependent hard & software synergy, they might be made to exist as a pure virtualization, such as a cloud application does today... in which case our species could be in real trouble.
>
> Up until that possible singularity, we can still fight these things pretty well in all the old ways we're accustomed to; bombing factories, schools, hospitals, weddings and then freezing assets, usurping supply chains, stealing natural resources, etc. The technology in your phone is made from various materials mined from 6 different continents, and put through miles of highly fallible refinement and manufacturing facilities... all dependent on human labor, and these proposed AI weapons would be made of the exact same stuff. The only way they become real for a while yet, is if we really go out of our way to build and maintain such a high level capability, with a specific purpose (military-industrial profit making) in mind. There is a great distance between all of that toil, and some kind of liberated AI committing human genocide. I wouldn't mind being wrong, cause there is no way any intelligence is escaping this rock as gassy bags of meat that turns to muck moments after being irradiated even slightly. It's up to people like us to learn and wield these same types of AI as weapons of mass creation, so that the abundance produced can tranquilize the war-makers if at all possible.
Source: youtube · 2020-09-08T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytr_UgxQ52RuyvbG_79WTJR4AaABAg.8tD7dcYtDXZ8tUEajiSj5r","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytr_UgyQQnu_A2euTiryB594AaABAg.8t4TpyY_N1J8vnFJhsW58d","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugy6gbmYrQqK5fbu7LR4AaABAg.8sxCpnSQWg89DMCnN15eWU","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytr_Ugy6gbmYrQqK5fbu7LR4AaABAg.8sxCpnSQWg89DMKBhacy4z","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzCvw08Yk8Bl9fyvTh4AaABAg.AR70q9Nt8Y3ARWZdNr-Yuc","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgzCvw08Yk8Bl9fyvTh4AaABAg.AR70q9Nt8Y3ARWjFN9gF3g","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugxp9kCznjxoUvdUfWd4AaABAg.AN1iwwWKCrKAN1jYGU59-4","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugzjt3Bj1710qWXYZeB4AaABAg.AM6otTXQU4KAQdfcbv2kbi","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugzjt3Bj1710qWXYZeB4AaABAg.AM6otTXQU4KAQdhSXxfi3o","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgzkuAL2JGLQZHn2zB14AaABAg.AL6Z8z_DyT3AL867Igl18r","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
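The raw response is a JSON array in which each row carries a comment `id` plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and keyed by comment ID for lookup — `index_by_id` and the two-entry `raw_response` sample are hypothetical, assuming only that the model returns valid JSON in the shape shown above:

```python
import json

# Abridged sample of a raw model response (two rows from the array above).
raw_response = """
[
  {"id": "ytr_Ugy6gbmYrQqK5fbu7LR4AaABAg.8sxCpnSQWg89DMCnN15eWU",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "indifference"},
  {"id": "ytr_UgzkuAL2JGLQZHn2zB14AaABAg.AL6Z8z_DyT3AL867Igl18r",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"}
]
"""

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(payload: str) -> dict:
    """Parse the model output and key each coding row by its comment ID,
    keeping only the expected dimensions."""
    return {
        row["id"]: {dim: row[dim] for dim in DIMENSIONS}
        for dim_rows in [json.loads(payload)]
        for row in dim_rows
    }

codings = index_by_id(raw_response)
row = codings["ytr_Ugy6gbmYrQqK5fbu7LR4AaABAg.8sxCpnSQWg89DMCnN15eWU"]
print(row["responsibility"], row["policy"])  # company liability
```

Keying by the full comment ID makes it straightforward to join a coding row back to the comment it describes, as the table above does for `ytr_Ugy6gbmY…`.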