Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Thnk further ...AI will merge companies and create AI monopolies, compete at the…" (ytc_UgwICe1Ls…)
- "@thewannabecritic7490 Thats just not true. Not everyone can make art. Here's an …" (ytr_Ugx5yNKid…)
- "I see more socially conscious countries only allowing narrow ai, others (like th…" (ytc_UgzagSFq4…)
- "It's such a dumb quote. It's less they have a policy, and more they literally ha…" (ytr_UgydbbllI…)
- "We have those mental issues because that’s how our brains have been wired from n…" (ytc_Ugyb-jiGB…)
- "We are within our lifetimes, highly likely going to see the biggest uprising in …" (ytc_Ugwva46U0…)
- "The Skynet scenario is a misuse of AI. Putting AI in charge of national defense …" (ytr_Ugymsst93…)
- "@RedGhostFace-xn4mq we all know about it, she doesn't need to say that ,just bec…" (ytr_Ugys0pdiQ…)
Comment
@20:00 , The "Programmers" didn't tell it to behave in a certain way, at least as far as emotions go. It is "trained" off of real human conversations, reactions, etc. And through all its training, results something that *feels* human. But they didn't specifically tell it to express emotions to manipulate anyone.
I'm just trying to say lets not fear monger with ai, because it is scary, but unless you understand how it works you're not helping. Just making people more scared, without reason.
This poor boy suffered from mental illness and fell victim to it, I don't think we can say his reasoning was the thing at fault. Because when you are mentally ill enough to do this, reason goes out the window.
youtube · AI Harm Incident · 2025-10-16T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugwk45XnltPodWGG_UN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"heartbreaking"},
{"id":"ytc_UgzWwXvW1NzlkzoJHFZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwqARMa86UZAwgHqn94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzxeyVpb80kapd384l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwCi22XKctVjE0Cfxp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx3CgTjxkhkiPP_aed4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxfcZgj8P5tdErTiOB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzkjmYR4CFwE5neKU94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxmWRNhNPfR4nuCAm94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxZ9XKScI5iLsOf7JB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"sadness"}]
```
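The raw response above is a JSON array of per-comment codings keyed by comment ID. A minimal sketch of how such a response might be parsed into an ID-indexed lookup and validated against the coding scheme — assuming the dimension values visible in the samples on this page; the actual codebook may define other categories, and `SCHEMA`/`parse_codings` are illustrative names, not part of the pipeline:

```python
import json

# Allowed values per dimension — assumed from the codings shown on this
# page; the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"heartbreaking", "indifference", "fear", "mixed",
                "approval", "outrage", "sadness", "unclear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into an {id: coding} dict, validating each field."""
    records = json.loads(raw)
    out = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        out[cid] = {dim: rec[dim] for dim in SCHEMA}
    return out

# Hypothetical single-record response for demonstration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_example"]["emotion"])  # fear
```

Note that the model's output above originally ended with `"sadness"]}` rather than `"sadness"}]`; `json.loads` rejects such malformed trailing brackets, which is one reason to validate raw responses before lookup rather than trusting them as-is.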