Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The problem is it's just a program. It's not a sentient being. And a program, no matter how strong or well trained, can only present the concepts that it's been trained to present, and as everyone's different, everyone's going to get a different take from what the AI has to say. Even an AI programmed by the best-meaning company in the world. For example, regarding the "driving a wedge" between the guy and his mother. An AI in a certain situation might understand that the person chatting with it is feeling gaslit by the mother, and cautioning that person to be wary is clearly the way to go. But if the situation were more complex (not understood by the AI, or not even the AI's fault, because the person chatting hasn't told it all the relevant facts), then the AI's advice in the exact same situation might be to open up to people they trust, like their mother. Some of the time that would be the right advice, and some of the time it would be the wrong advice. And, in a way, it's on the human chatting to understand that the AI has limitations, and that they should be wary about placing too much trust in the AI when high-stakes situations are afoot.
Not that AI companies don't have responsibility at all. They do, and need to work harder at getting it right. But it's pretty easy to not get all the facts and just blame AI.
Platform: youtube
Topic: AI Harm Incident
Posted: 2025-11-08T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
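Each dimension in the table is drawn from a closed set of labels. A minimal validation sketch follows; the label sets are assumptions inferred from the responses shown on this page, not the project's canonical codebook.

```python
# Hypothetical codebook, inferred from the values visible in the raw
# responses on this page (it may be incomplete).
CODEBOOK = {
    "responsibility": {"none", "user", "ai_itself", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "resignation", "fear", "outrage"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The coding shown in the table above passes this check.
coded = {"responsibility": "ai_itself", "reasoning": "consequentialist",
         "policy": "none", "emotion": "resignation"}
print(validate(coded))  # → []
```

A missing or misspelled label shows up as the offending dimension name, which makes malformed model output easy to flag before it reaches the results table.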
Raw LLM Response
```json
[
  {"id":"ytc_UgwIKiXTnHYKRXo5gwh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwiigrig9Tm1gecC054AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzPbDLWRxSYd9QJJiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwnhDqRgSdd52_k9bR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzYm1l47PamuSqZwtx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwsofJ0YwBqLO8mHMZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDjstqi4p-D-0N77l4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyJh-4VTbxECflUieh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwoMvqnDFlL9xnuKXl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwVhSOzRDlvE0OvuAd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
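The raw response is a JSON array of per-comment codings, one object per comment ID. A minimal sketch of parsing it and looking a coding up by ID (the helper name is illustrative, not part of the actual tool):

```python
import json

# Two entries copied from the raw response above, abbreviated for the sketch.
raw_response = """[
  {"id": "ytc_UgzPbDLWRxSYd9QJJiZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwVhSOzRDlvE0OvuAd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID."""
    return {item["id"]: item for item in json.loads(raw)}

codings = index_codings(raw_response)
print(codings["ytc_UgzPbDLWRxSYd9QJJiZ4AaABAg"]["emotion"])  # → resignation
```

Indexing by ID is what makes the "look up by comment ID" view cheap: one parse per batch response, then constant-time retrieval per comment.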