Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "manu ochenta A.I. is building itself right now the smartest scientist on the pla…" (ytr_UgzYYKkeq…)
- "Artificial intelligence is as dangerous as nuclear or biological weapons but it …" (ytc_Ugzgetm6-…)
- "Well, if everyone thinks they know "who" is going to make "what" horrible things…" (ytc_Ugx8hKP-D…)
- "We older folks know better than a juvenile robot like you. We saw that coming be…" (ytr_UgwVBirOm…)
- "I train and tune image diffusion models. Trying to get it to produce non-biased …" (ytc_UgxgrOoeM…)
- "AI may have just been born and the first thing they want to ess with is religion…" (ytc_UgyIRBhhl…)
- "Nope, but their self driving can actually get right up to a charger without touc…" (ytr_UgzW1wE2f…)
- "how can AI replace everything while more then 60% of ppl still work the land wit…" (ytc_UgyMfpPh_…)
Comment
Excellent talk!
Many AI tech companies seem focused primarily on maximising profits by developing agentic-level applications, often without fully considering their broader societal impact.
You’re doing an outstanding job advocating for AI that is safe, ethical, and responsible for humanity.
To keep our world relevant and safe, human agency must remain central—guiding how AI is designed, deployed, and governed.
youtube · AI Responsibility · 2025-06-08T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzOZiVhyEMOpcvFqB54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwiVjyr_5tg-RcvFvV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgyKeDntnMM60Vf0jWB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwpQdztZi9r4qMpOCx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxjBHVS2YrKKCv83wV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyRuXwu6y7rk8G2w4F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwOb74MQKfzyISsx354AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRPo6mqDcWGfAh1zd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyFCbCpQb-G5z71MAh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwKH9UWA7G1yIUrQTJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
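The raw response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and indexed by comment ID (the field names follow the array above; the validation step and the `index_codings` helper are assumptions, not part of the tool itself):

```python
import json

# Two records in the same shape as the raw LLM response above
# (IDs and values copied from that array).
raw = """[
  {"id": "ytc_UgzOZiVhyEMOpcvFqB54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwiVjyr_5tg-RcvFvV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "unclear"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(payload: str) -> dict[str, dict[str, str]]:
    """Parse a batch coding response and key each record by comment ID,
    rejecting records that are missing an ID or any dimension."""
    coded = {}
    for rec in json.loads(payload):
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"incomplete record: {rec}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = index_codings(raw)
print(coded["ytc_UgzOZiVhyEMOpcvFqB54AaABAg"]["policy"])  # regulate
```

Keying by comment ID is what makes the "Look up by comment ID" view above possible: each coded record can be joined back to its source comment regardless of the order the model returned them in.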