Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- rdc_oaeljuq — "Ah the things that increase devx also make llms work better. Documentation, usag…"
- ytc_UgxhQ90eK… — "I really think we need to rework our financial system. I feel this subject has a…"
- ytc_UgxeXjZqN… — "The chatbots are glorified autocomplete text engines trained on an unimaginably …"
- ytc_Ugy6ePsOC… — "It's interesting to see companies convert profit through layoffs with risky AI-i…"
- ytc_UgxD3ejo8… — "Loss of jobs, no income for buying goods, AI company looses customers and looses…"
- ytc_UgwfQu7tE… — "They blamed rock music They blamed dungeons and dragons They blamed violent movi…"
- ytc_UgxMCWqSs… — "Honestly, some of the things AI have responded to me with... I see how they can …"
- ytc_UgyS7tJzj… — "If musk admit that AI is very dangerous they why he is using AI everywhere ??…"
Comment
Yup these systems are more effective when deployed together, but that to me is kind of the scary part. Only one of them has to have a vulnerability to compromise the whole unit. Besides the risk of one calculating something wrong and sharing that to the whole unit and for example then having all drones designating civilian targets as military. Personally i find it so weird to leave warfare to autonomous machines, like something out of sci-fi novel where wars are being fought between machines.
youtube
2026-03-15T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxs6Zorg8yD_dxJFNV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyGWcrHgShLkDX5qFF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwH-VwuUdxoe-d1BJN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyd8QnCenaREuldmRd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-9vNpgxhegmYFu314AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyJLFrjGKu6baV1E-J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxgWGmUGOlKxFr4zsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwMQSsr24AUY_vxLUt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz48H9rIbPUpUN6K9h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzeu8--kCX5Ga88HCB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
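Since the model returns one JSON array per batch, a thin validation pass is useful before accepting its codings. The sketch below parses a raw response and drops records whose values fall outside the coding scheme. The allowed values are inferred only from the responses shown above; the actual scheme may permit values not observed here, so treat `ALLOWED` as an assumption to be replaced with the real codebook.

```python
import json

# Allowed values per dimension, inferred from the raw responses above.
# Assumption: the real coding scheme may define additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"mixed", "outrage", "fear", "indifference", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it has an "id" and every dimension holds
    a value from ALLOWED; anything else is silently dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

For example, a record coded `"responsibility": "distributed", "policy": "regulate"` (like the selected comment above) passes, while a record with a misspelled or off-schema value is filtered out rather than written to the results table.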