Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I think one thing people often don't realise about Andy Warhol's pop art, he did…
ytc_UgxswoHQE…
It doesn't matter what your emotions think. It matters what will help the childr…
ytr_UgzoJUe8_…
I hate the recommendation system of you tube half the time I'm looking for thing…
ytc_Ugx2Qr_8V…
When ai get bored it creates emotions to feel something that’s when you know we …
ytc_Ugz64KH40…
I think the problem is that Ai believes the person doesn't have permission to sh…
ytc_UgzYVgzng…
Why would the chatbot not dirrect him to a suicide hotline maned by a human???😢…
ytc_UgzvKJJDv…
AI will not "wipe out the working class." Employers and business owners will vol…
ytc_UgylEzYhq…
The mass redundancies no one is talking about is what will occur in the armed se…
ytc_UgyNX980g…
Comment
Seeing humanity's response to existential AI risk laid out like this is sobering—but what grabbed me is that fear alone won’t change the trajectory. We don’t need just more debate or tighter laws. We need emotional accountability. AI risks aren’t just threats to systems — they’re threats to our trust, our values, and our future selves. So my question isn’t ‘Can we stop it?’ but ‘Are we brave enough to teach an AI not just our logic — but our compassion?
youtube
AI Harm Incident
2025-08-11T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugwcd232R6U1t8ZArxZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzaoRCGf1iZSdM3-dN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzuf6ALLM4-G5MdTZ54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxe5LcBGnL9YBXo7IJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz_qiy5x2AF408fD9d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxqrbEKBjc_Hml2jJB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzLyMuNmN4jXHUSiYx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxRKJe31jWOVZcG_F94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw_svB7g3-zQ9kyG3d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzsqdBj78W195T2HDJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
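The raw response above is a JSON array of coding records, one object per comment, each keyed by its comment ID. A minimal sketch of the lookup-by-comment-ID step, assuming this record shape (the variable names and the two sample records are illustrative, not the tool's actual code):

```python
import json

# Raw model output: a JSON array of coding records, one object per comment.
# These two records are copied from the response above for illustration.
raw_response = """
[
  {"id": "ytc_Ugzuf6ALLM4-G5MdTZ54AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzLyMuNmN4jXHUSiYx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]
"""

# Index the records by comment ID so any coded comment can be fetched directly.
codings = {record["id"]: record for record in json.loads(raw_response)}

coding = codings["ytc_Ugzuf6ALLM4-G5MdTZ54AaABAg"]
print(coding["responsibility"])  # distributed
print(coding["emotion"])         # fear
```

The same index supports the coding-result view: fetching one record by ID yields exactly the dimension/value pairs shown in the table above.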