Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
There are two problems: technical and social.
Technically, previous systems have all had control transferred to a human in the trigger loop. I don't know about KAIST, but the Korean Defense Department (ADD) is funding both robotics and AI. The way I heard it phrased during an explanation of their current research directions was "but of course we cannot actually hook it up because of the ethical issues". Right, like once the hardware and software are both developed they will be strongly firewalled.
Socially, my guess would be that either the KAIST guys were bragging at an international conference, or they were submitting military-funded conference papers about distinguishing types of clothing with AI (or something else suitably suspicious) and other researchers put 2 and 2 together. Money is on the former: most professors are not shy about talking shop, because talking leads to international collaborations.
There is another possible social aspect to this: most countries don't publish research papers on actual weapons development. Koreans see it as low-hanging fruit, and so they do. As in, they will see from PR demos that the US developed a system, but didn't publish papers on it, so they re-develop the idea, and publish papers themselves. Could be very problematic when the technology is uniform-distinguishing AI, and the paper contains pseudocode...
reddit
Cross-Cultural
2018-04-05 (Unix timestamp 1522949919)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_dwuf98t", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "rdc_dwujpp2", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "rdc_dwv2iti", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "rdc_dwv4fp2", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability",     "emotion": "mixed"},
  {"id": "rdc_dwvpg1u", "responsibility": "none",      "reasoning": "unclear",          "policy": "industry_self", "emotion": "indifference"}
]
```