Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “It was time 30 years ago as soon as we figured out how to automate the labor 😑…” (`ytc_Ugz5U1_AL…`)
- “We appreciate your feedback. Remember, the video showcases just a glimpse of wha…” (`ytr_Ugy_s7JPl…`)
- “like the guy above me said, IT IS NOT ART, it is just words that you typed that …” (`ytr_Ugx_JHOMn…`)
- “Your question at the beginning isn't about who's more likely to commit a violent…” (`ytc_Ugw9-MT_1…`)
- “Even in countries that have "decent" healthcare systems, talking to an AI will b…” (`ytr_UgyB1j7Ta…`)
- “Ok but the part that makes me genuinely wonder: did Disney think the hard part w…” (`rdc_ocpfe9y`)
- “i dont directly got issue with ai/LLMs, i got issue with the idiots that are sho…” (`ytc_UgyGF_PWu…`)
- “Only one way I can see AI being in art. Correcting the fill bucket mistake. You …” (`ytc_Ugzb7Kugf…`)
Comment
After watching this TED Talk, I realized I hadn’t thought much about AI’s real-world impact. She says AI is dangerous because it can harm people by spreading bias and hurting the environment. I agree partly, but I also think she underestimates how fast AI is growing. I recently saw my friend lose a freelance job to an AI tool. It made me worry about jobs and fairness. While I support using AI carefully, I still think the speed of change is scary and needs stronger rules. AI is useful, but we need to slow down and think more.
youtube · AI Responsibility · 2025-04-29T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwcpcTEZ28DGRJBFvl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxH2pLh_sH3Gtpxnep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxmVk309wfhKlpxZEN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFftdqM_pHPByH4wF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw-S0FthI9MHq2J_tl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzILN_LZbm-VF16S1d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyryKhuDb5C42rUikp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxY_J6iqHQCO_xN8b94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz6kGXD5d8bFaavOc94AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzPLHfzVhfHVGyhgFt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
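The raw response above is a JSON array of ID-keyed records, one per comment, with the same four dimensions shown in the coding table. A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID — the allowed value sets below are inferred from the codes visible in this sample, not from a published codebook, and `index_codings` is a hypothetical helper:

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"distributed", "none", "unclear", "company",
                       "ai_itself", "government", "developer"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none", "unclear", "liability",
               "industry_self", "ban"},
    "emotion": {"fear", "indifference", "outrage", "resignation", "approval"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index records by comment ID,
    rejecting any record whose value falls outside the allowed sets."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# One record from the response above, used as sample input.
raw = ('[{"id":"ytc_UgwcpcTEZ28DGRJBFvl4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
codings = index_codings(raw)
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each coded record can be fetched in constant time once the batch response is parsed.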