Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Even at 10x safer than humans on average, autonomous vehicles will still crash a…" (ytc_UgwXP-e2C…)
- "People were opposed to relatively autonomous robots that could kill without huma…" (rdc_oho6lo6)
- "Can feed the hungry as fast? Can you make a robot to cure the sick? To make vast…" (ytc_Ugy31sFPo…)
- "The flaws haven’t made it to the media because the truth doesn’t make the magic …" (ytc_UgyAEtpvd…)
- "Seriously, you are using Upper Echelon as a reputable source based in facts. PUH…" (ytc_UgzUEkfqk…)
- "Makes sense, automate the warehouse, truck, burger flipper and cashier. Guess a …" (ytc_UgwFGJvrf…)
- "Here is my review: There are more outputs, think of a wall e scenario. AI works …" (ytc_UgzmKY1eC…)
- "That's an interesting point! In the video, Sophia emphasizes her journey of lear…" (ytr_UgzhlhxJ1…)
Comment
I think weaponized AI is inevitable. Instead of human life, we would sacrifice autonomous drones, and casualties in wars would drop massively. However, it can be dangerous if it falls into the wrong hands. Solution? In-built systems that are made to save the victim's life. Companies that produce the automated weapons legally cannot create them without certain requirements.
Examples for requirements:
1. Disarming instead of killing if possible
2. Do not kill a person that is not fighting back or attempting to escape
3. Preferably aim for the legs
4. Report locations of injured victims
Source: youtube · Posted: 2019-06-07T14:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwt8sMdBrhPQyu_LUB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzRxh4ObPNBNt67lh14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQoiHSIs4eldEWBpp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzoUFxnh9VDPKRP5TV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyyYfDnFH8lVL1HuFx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgylFoBYaF-oMCnMV1l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzXiT_wWXRxjIgkXOl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwmKYJt_40W0-9SM6N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy32y9cdwEaLZcfa2R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwJD2lznMWMukhxokd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
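The raw LLM response is a JSON array with one coding per comment. A minimal sketch of how such a batch might be parsed and indexed by comment ID so the per-dimension table can be rendered for any comment (the `index_codings` helper and the `DIMENSIONS` whitelist are assumptions for illustration, not the tool's actual code; the two example rows are taken from the response above):

```python
import json

# Hypothetical two-row batch, copied from the raw response shown above.
RAW_RESPONSE = """
[
  {"id": "ytc_UgzQoiHSIs4eldEWBpp4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyyYfDnFH8lVL1HuFx4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"}
]
"""

# Assumed coding dimensions, matching the table columns above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and return {comment_id: coding dict}."""
    indexed = {}
    for row in json.loads(raw):
        # Skip malformed rows rather than failing the whole batch.
        if not all(key in row for key in ("id",) + DIMENSIONS):
            continue
        indexed[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return indexed

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgzQoiHSIs4eldEWBpp4AaABAg"]["policy"])  # regulate
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each coded comment is a single dictionary access rather than a scan of the batch.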