Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "The AI he's using might have a soul, but the guy obsessively prompting anti-arti…" (`ytc_UgzwNYH2d…`)
- "If ai advances even more and switches to robotic machines, we are going to be ex…" (`ytc_UgzVDMIoe…`)
- "*sighs* I know these channels...reporters...programmers just want attention,or f…" (`ytc_UgwRIgn7S…`)
- "Just to stop you for a moment, but, Are we for AI or against it? Because, person…" (`ytc_Ugy7y9xmS…`)
- "Ai is doing nefarious or malicious things because, who programmed it? People did…" (`ytc_UgyYF40za…`)
- "If a person asks Chatgpt to write an article -why would they remember the materi…" (`ytc_UgyHSMpW2…`)
- "When the Godfather of AI says 'We F'd Up' I think we should listen. But let's be…" (`ytc_UgxqA9hFD…`)
- "They can’t and don’t have to because we are already certain of AGIs imminent arr…" (`ytr_Ugwmsuws_…`)
Comment

> I love the video, but I don't like how the website supports a _ban_ on autonomous weapons, as if that's going to work against bad actors rather than simply making good actors defenceless against them. What I think instead needs to be made, is some kind of defence against the drones. One can't stop the development of technology, but one can certainly use technology to protect against technology!

- Platform: youtube
- Video: AI Harm Incident
- Posted: 2019-09-18T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw9aPA7K3GzrUCm2NB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxjEXQ6R0TfuQzPs3F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwNK8jXni6ZVTrCdrh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgywtifX_xuFpe13wrh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxG99apIN34CmfJ1Z14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwYdaJn7RDaKGLYjs14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxHw6_0WYQ_e-O8S9N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyWt54KORazNmpsDJR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwlDO5SwSWHSgQizDh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugysw8_2kK643-TqQ5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
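A raw response like the one above can be parsed and indexed by comment ID to support the lookup described earlier. The sketch below is illustrative, not the tool's actual implementation: the `SCHEMA` sets contain only the category values visible in the responses on this page (the real codebook may define more), and `index_codes` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above; the project's full codebook may include additional categories.
SCHEMA = {
    "responsibility": {"government", "user", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "resignation", "indifference", "mixed", "unclear"},
}

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded records) and
    return a dict mapping comment ID -> coded dimensions, rejecting
    any value outside the schema."""
    coded = {}
    for rec in json.loads(raw_response):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"ytc_UgwNK8jXni6ZVTrCdrh4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}]')
codes = index_codes(raw)
print(codes["ytc_UgwNK8jXni6ZVTrCdrh4AaABAg"]["policy"])  # industry_self
```

Indexing by ID also makes it easy to join the model's codes back onto the sampled comments, which is how a "look up by comment ID" view can resolve a record like the coding result table above.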