Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
>I’d rather see the topic of mainstream discourse switch to this from abortio…
rdc_jmflqvz
You have 1000% right. If this AI is so divine then it can make its own dataset! …
ytc_UgyAB_XKt…
The scary part about AI isn’t that it makes people lazy, it’s that it can quietl…
ytc_UgwoeyALO…
yea no. hes not being accused of being a commie bc he wants to make buses free. …
ytc_UgzGjKOst…
So you're blaming the AI? It isn't even the AI fault but the humans uses them. Y…
ytr_UgxIuLYHS…
I was paper… flat one demential.😅 You are scissors!? “Cut me Out” sounds like t…
ytc_UgzsyJyj2…
I don’t know why but I always had a fear that AI will only grow larger and artis…
ytc_UgyjuaQ-E…
I am really hopeless with all these Ai and it's video, just suicide seens the ri…
ytc_UgyqsksUp…
Comment
collaborates actively in a potentially dangerous technology development, within an organisation whose purposes are the exact cause of such dangerous use... grabs enough money to retire, leaves and white-knights... One can argue 'good faith' maybe around 2018/2020, but since then the consequences of AI development are clear enough, especially when you are working within research, to forbid the excuse of naivety.
Several threats indeed: population control through data and behavioural exploitation, economic reshaping through destruction of jobs (in economies that are based upon labour exploitation...), financial turmoil with the shift in profitability.
youtube
2026-02-14T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxtYf4heZ7Pck0hTmx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx6_GwA8niYcr5UQvZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzvzgAK4z2FrdMpw2V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw09uTHjo5ps-npDKt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZZyE4skSXuRhPJQZ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzFuEfs75p-59vQam14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx9ND4l_oVWVB--fJF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwEKyT-ZSmVode1PV14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgzKOHIQr1dFs9mmdYZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwOgJ8zPr2FchdeQOt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
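The raw response above is a JSON array of one record per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a payload might be parsed and sanity-checked before use. The allowed value sets below are inferred from the samples on this page, not from an authoritative codebook, and the record IDs are made up for illustration:

```python
import json

# Allowed values per dimension, inferred from the samples above
# (hypothetical -- the real codebook may define other categories).
ALLOWED = {
    "responsibility": {"company", "government", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records
    whose every dimension has a recognised value."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed record
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = (
    '[{"id":"ytc_example1","responsibility":"company","reasoning":"virtue",'
    '"policy":"liability","emotion":"outrage"},'
    '{"id":"ytc_example2","responsibility":"society","reasoning":"virtue",'
    '"policy":"ban","emotion":"fear"}]'
)
codes = parse_codes(raw)
# The second record uses a responsibility value outside the inferred
# codebook, so it is dropped; only the first record survives.
```

Filtering rather than raising keeps a long batch run alive when the model emits an occasional off-codebook label; the dropped IDs could instead be logged and re-queued for re-coding.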