Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "FSD has been around for a while with Waymo and other companies. And if you’re ta…" (ytr_UgwFKhqdy…)
- "Hinton definitely showed his progressive leftism with his TDS and obvious Musk h…" (ytc_UgzZPMFdn…)
- "Boeing ditched its traditionally engineer-led business for MBAs and their planes…" (ytc_UgzyzYv33…)
- "Garbage like this is going on, and then you have people (mostly sexist men) out …" (ytc_Ugz5jTFv3…)
- "Short story: human live is not as important as americam interests, that is what …" (ytc_Ugz_I-736…)
- "All ChatGPT has to do is provide the next follow-up answer that is consistent wi…" (rdc_kj58pu3)
- "Deep Greed Of Whealth In Capital Classes And Capitalist Family Like To Use Machi…" (ytc_UgwDL3hRi…)
- "WoW that is absolutely INSANE!!!to really see that Robot sticking and moving lik…" (ytc_UgzFPCfse…)
Comment

> imo we really need to stay away from AI _confirming_ targets. AI _identifying_ potential targets is fine, but there's no substitute for human oversight when it comes to target _confirmation._ Letting AI do that will result in rampant civilian casualties and friendly fire, and that's an unacceptable cost even _aside_ from the obvious moral problem of a lack of friction when killing.

Source: youtube
Posted: 2024-08-11T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxZSMy9Cr_Qkg5s2tp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxbTIG_Y-zZitMkj6N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwEsNZOEk2jgwFpVWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwpkVlxyw1x_j5d3cd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx14l9tg_qTKs8OtJF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzd4oUUkq7252Imm-R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz_qjtNEVqE4CTrkyJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxwXYQSGdXzinxcV4R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxDB_PqkFhWrKQ8q_h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxTj-G6m0-vYnpvRLV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
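The per-comment lookup above amounts to parsing the raw response into a mapping keyed by comment ID. A minimal sketch of that step in Python, assuming the response is a JSON array of records like the one shown; `parse_codings` and `REQUIRED_KEYS` are illustrative names, not the tool's actual code, and only the five fields visible in the response are checked:

```python
import json

# Raw model output, truncated here to two of the records shown above.
raw = '''
[
  {"id":"ytc_UgxZSMy9Cr_Qkg5s2tp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxbTIG_Y-zZitMkj6N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
'''

# The five fields every record in the response carries.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw LLM response into {comment_id: dimensions},
    rejecting records that are missing any expected field."""
    codings = {}
    for rec in json.loads(text):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing {sorted(missing)}")
        codings[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS - {"id"}}
    return codings

codings = parse_codings(raw)
print(codings["ytc_UgxbTIG_Y-zZitMkj6N4AaABAg"]["policy"])  # regulate
```

With the full ten-record response, the same lookup by `id` yields the dimension table rendered above for the displayed comment.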