Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgzXx-hZ_…`: "AGI can be anything and it depends on the use case right now it's in chatbot but…"
- `ytc_Ugy1dXqf-…`: "You forgot one The one that basically uses character AI like A browser history a…"
- `ytc_Ugy2XNPhg…`: "The problem is that inputs into the economy in power and raw resources wont keep…"
- `ytc_UgxA9_nz3…`: "Stop calling it AI. People who don't understand how the tech works think that th…"
- `ytc_UgxK1--Dc…`: "I suspect that the generators are already going to struggle because their traini…"
- `rdc_i6saw8r`: "Well yeah, but it could be that this, as an imperfect detector, skews the genera…"
- `ytr_UgzDJqWDP…`: "Yeah currently in uni right now they're actively telling us to use AI when we co…"
- `ytc_UgzSkv-y8…`: "AI won't take all our jobs…. If you've ever used an AI chatbot on a website or c…"
Comment (youtube · AI Harm Incident · 2025-07-28T08:2…)

> I mean if ai was human, protecting themselves against a threat would be perfectly normal. Subjecting it to these situations would evoke a similar response In humans. Therefore who’s to say all ai would want to kill humans if we weren’t just provoking it.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | contractualist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyOb3-78Ftql8Lih154AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzC_sFISORXeFjxdnB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxffxAI7xNcRCOa-vx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwBDzqKPRWCbBWxiwp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwtIdy-bWbhQC3-rZN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgybPodMiCpTFsicAZh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzYG-HS3-T2Md5c4hd4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy_PBI2xTSMcXkGb3V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx2_6004D9LccJjuVl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx8g5FRIz375aiGuNd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
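A raw response like the one above is a JSON array of per-comment records, each carrying an `id` plus the four coding dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). The sketch below shows one way such a response could be parsed and validated before the codes are stored; the function name and the `ALLOWED` value sets are illustrative, inferred only from the values visible in this page, and the real codebook may permit more values.

```python
import json

# Allowed values per dimension, inferred from the samples shown above
# (assumption: the actual codebook may define additional values).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "mixed", "resignation"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimensions},
    silently dropping records whose dimension values fall outside ALLOWED."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        dims = {k: v for k, v in rec.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in dims.items()):
            coded[rec["id"]] = dims
    return coded
```

Validating against a fixed value set catches the common failure mode of the model inventing an off-codebook label; dropped records can then be re-queued for recoding rather than polluting the dataset.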