Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "It's one thing to play around with A.I while crediting the source but, people ar…" (ytc_UgzA4u-AJ…)
- "The people who religiosly defend using AI for creative tasks are the same people…" (ytc_UgzFkBxg_…)
- "If anyone wants to see a possibility of full automation with no jobs for people.…" (ytc_UgwEfVEYR…)
- "I was on the fence about self driving cars but your arguments convinced me that …" (ytc_Ugxrvn56y…)
- "Maybe the biggest issue here is these AI music tools aren't granular enough? May…" (ytc_Ugx9xLOLf…)
- "you're actively not supporting artists as an artist yourself by using the automa…" (ytr_UgzbjeeMI…)
- "Most people who dislike ai aren't using wall e argument, the people who support …" (ytc_UgxfKwPgy…)
- "The worst part is that AI art is not magically originated. It is stolen bit by b…" (ytc_UgzNxjdK3…)
Comment
Many ML algorithms are black boxes. Even the developers may not fully understand what features the algorithm is using to label a target as legitimate. The idea that a military commander will be able to look at an AI-generated target list and be able to pick out the algorithm good calls from bad calls based on their experience demonstrates a profound lack of understanding for how they are trained and operate.
Source: youtube · Posted: 2025-02-01T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyz8LKw2HwM5v870kV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwNZkpnlHywoCPOsLR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwvBUMeP8fX2d5ZGSV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxIoY0fBVsT6S0nm2B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxsPbLTKk_2nSQM5lB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw8Aag95PwXCWD8qS14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz51lCHbNxgTorYh2x4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz01A2y_VHJGSqbA4t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyb5vaK-ovzXxWgMU54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzG2N--D94wRjM5kht4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
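A raw response like the one above can be validated before the codes are written to the coding table. The sketch below is a minimal validator, not the tool's actual implementation; the allowed value sets and the `ytc_`/`ytr_` ID prefixes are assumptions inferred from the samples shown on this page, and the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension. NOTE: inferred from the sample
# responses shown above -- the actual codebook may include more values.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose ID prefix and
    dimension values match the expected vocabulary."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # ytc_ = comment, ytr_ = reply (prefixes observed in the samples above)
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgzG2N--D94wRjM5kht4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
print(validate_response(raw))  # the one well-formed row survives
```

Rows with an unknown category or a malformed ID are silently dropped here; a production pipeline would more likely log them for re-coding rather than discard them.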