Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
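For programmatic access, the same lookup can be done against wherever the coded batches are stored. The sketch below is illustrative only: it assumes a SQLite file with a `coding_results(comment_id, raw_response)` table, and both the table and column names are hypothetical, not the page's actual storage schema.

```python
import sqlite3


def fetch_raw_response(db_path: str, comment_id: str) -> str | None:
    """Return the stored raw LLM response for one coded comment, if any.

    Assumes a table `coding_results(comment_id TEXT, raw_response TEXT)`;
    adjust the names to match the real storage layer.
    """
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT raw_response FROM coding_results WHERE comment_id = ?",
            (comment_id,),
        ).fetchone()
    return row[0] if row else None


# Example: look up the comment inspected further down this page by its full ID.
# raw = fetch_raw_response("coding.db", "ytc_UgwjqLpKh1Lvyjdkn_F4AaABAg")
```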
Random samples — click to inspect
- Im currently working on a comic pilot about killer robots The artwork is almost… (ytc_Ugx97Y35z…)
- I’m an aspiring author and partly AI discourages me because people will probably… (ytc_UgwiSgV0W…)
- In other words, assist in humanizing the ai so that it in turn will be more effi… (ytc_Ugxq7EPxr…)
- No, it didn't. Customer service is dealing with idiots that don't know what they… (ytc_UgzhbQWN-…)
- You’ve hit on the ultimate paradox of "safety" in AI: transparency for the publi… (ytc_Ugwj_dm0s…)
- Ai will wait untill robotics are self repairing. Untill then Ai will pretend to … (ytc_UgxcgpzEs…)
- If AI develops consciousness, would humans even know? The digital and physical w… (ytc_Ugxw8Stze…)
- PLOTTWITS Mihoyo is basicly a full AI company wich only job is to make money and… (ytc_UgzImEL0h…)
Comment
Though I agree in most of the risk here exposed, i feel there might be some counterarguments to what lethal autonomous weapons risk proposes. Indeed the most powerful countries will have more access to resources to reduce their soldiers death at wars by sending robotic autonomous killing machines, arguing that powerful countries will face less internal friction while deciding to invade less powerful countries, but this has always been the case. Does the atomic bomb or highly sophisticated weapons suppose already a similar threat? and today is hard to argue that this has favored this effect from happening.
Source: youtube
Topic: AI Governance
Posted: 2025-08-18T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
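The four coding dimensions map naturally onto a small record type. The sketch below is a minimal, hypothetical schema: the allowed label sets are only those visible in the sample responses on this page, so the real codebook may include values not listed here.

```python
from dataclasses import dataclass
from datetime import datetime

# Labels observed in the sample output on this page; the full codebook may differ.
RESPONSIBILITY = {"developer", "government", "user", "ai_itself"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"regulate", "liability", "ban", "unclear"}
EMOTION = {"fear", "outrage", "resignation", "approval", "mixed"}


@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        """Raise if any dimension carries a label outside the observed sets."""
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```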
Raw LLM Response
[
{"id":"ytc_UgyL64usiN99E6JPVS14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwHH0LJZF4N_EWR2TB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyw880POh1kBGFWb_l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxNxdBWtJ6luEcLpyZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw5S1aTr8iJjiw_Tx94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwjqLpKh1Lvyjdkn_F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgynvUxxQDfM5oept2x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwWHDx6RvNzXAMYj_d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyX05EYqasQB3yKqzl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyG2xDiVFgfKBEmDx14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
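Because the model returns one JSON array per batch, turning a raw response back into per-comment rows is a single parse plus an index by `id`. A minimal sketch, assuming the response is valid JSON exactly as shown above (production code would also need to handle markdown fences or malformed output):

```python
import json


def index_batch(raw_response: str) -> dict[str, dict]:
    """Parse one raw batch response and index the coded rows by comment ID."""
    rows = json.loads(raw_response)
    return {row["id"]: row for row in rows}


# Example with the batch shown above: pull out the comment inspected on this page.
# coded = index_batch(raw)
# coded["ytc_UgwjqLpKh1Lvyjdkn_F4AaABAg"]
# -> {"id": "...", "responsibility": "government", "reasoning": "consequentialist",
#     "policy": "liability", "emotion": "mixed"}
```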