Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Also imagine if illustrators could sell their own AI models that they trained on… (ytc_Ugzw6OFPE…)
- I would say the opposite. People who love AI art love art and love ai art becau… (ytr_Ugwk8vfnM…)
- AI itself is not a risk, asking too much "What's best for companies?" instead of… (ytc_UgziZsIz3…)
- Honestly, am I the only one finding AI art much better? What do you mean it does… (ytc_UgyAHvO7k…)
- Incredible interview! And Karen Hao has done some amazing investigative journali… (ytc_UgzZUw0UT…)
- There should be an association a comity where all AI developers work together to… (ytc_UgzWPcnYG…)
- I'm a bit surprised by how often the Google AI is wrong on various topics.… (ytc_UgzjjXIsE…)
- Money is an imaginary construct and we need to cease its usage. At first it seem… (ytc_UgxkOGI6-…)
Comment
This one deeply resonates. I'm autistic, when I was growing up there were no ai bots but I had my imagination, and let's say I acted the same. It's not uncommon for us, on the spectrum, to daydream to learn how to navigate the world, but I can easily turn into maladaptive daydreaming. In my case, I knew it wasn't real, but then a cult took advantage of it 😅 Something that is, sadly, very common.
I don't think it's just a gen Z issue, I'm a millennial and I'm noticing we are very similar. At least an AI is someone else, at least you know it's a computer and not your own mind, that's why they can easily find comfort in them. Just know I don't condone it, back then I did so to survive, and this kid was trying to survive as well, but AI is not well regulated enough. We need to create laws to protect ppl from the possible dangers, we need to stablish rules on how to make AI to not cause harm.
I'm so sorry about his loss... Rest in peace.
youtube · AI Harm Incident · 2025-07-20T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
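A coded row like the one above can be sanity-checked against the codebook before it is stored. The value sets below are inferred only from the samples on this page and are likely incomplete, so treat them as assumptions rather than the full codebook; `invalid_fields` is a hypothetical helper name.

```python
# Validation sketch: flag dimensions whose coded value falls outside the
# value sets observed on this page (an assumption, not the full codebook).
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "industry_self", "ban", "regulate"},
    "emotion": {"outrage", "fear", "resignation", "mixed"},
}

def invalid_fields(row: dict) -> list[str]:
    """Return the dimensions whose value is missing or outside the inferred sets."""
    return [dim for dim, allowed in ALLOWED.items() if row.get(dim) not in allowed]

row = {"id": "ytc_UgyT9EGLehsq9wqnYEx4AaABAg", "responsibility": "user",
       "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
print(invalid_fields(row))  # []
```

A row that slipped past the model with an out-of-vocabulary value would come back listed, e.g. `invalid_fields({**row, "responsibility": "alien"})` returns `["responsibility"]`.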
Raw LLM Response
```json
[
{"id":"ytc_UgxU9GLhCZzc0AjJ3Q14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwwZoiioKPoiKu1H294AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx7U7jeN9OypzI9UZd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyT9EGLehsq9wqnYEx4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxfKBeYpVqUNKhD-PN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgzNrZP_pam-oKDFaN54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwbFD21V2t52qYlpnV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzZ072I8vjPO0nlGPJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyeiENZzWPKdhI26R94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy6i8LG0EURrijBhEZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```