Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- rdc_o4hlp7n: You’re going to get downvoted but you’re correct. This sub is people who can’t g…
- ytr_UgyNpksF_…: @flipflip143 I am guessing that your clients could use AI to design the garden, …
- ytc_UgwQ4UUdJ…: Yea im calling bs. Atleast give references. Posting out of context clips sounds …
- ytr_Ugz3T1Tjk…: @Creslin321 You'd have to be real special to not recognize how AI is going to co…
- ytc_Ugx8l0hg9…: I was served a short with someone in the drivers seat of a Tesla on the highway …
- ytr_UgxF_XAuY…: Spiderman: Into the Spiderverse, Spiderman: Across the Spiderverse, Kung Fu Pand…
- ytc_Ugz_dxZbo…: OMG, his opinions are like AI. We as a people form a government, elect members o…
- ytc_Ugygipym0…: Hear me out: The shortest path ain't the fastest path. Explanation: If all 69k …
Comment
People claiming those things don’t understand what human is and what AI is. AI is just a machine programmed by a human, that is all. If it does harm or something bad, it is how it is programed like an advanced computer. Seriously, fear mongering to get more likes. You don’t bring any specific information, any context, any extra information, that would explain what is happening and why? Just cherry picking some peoples’ speculations. Yes AI is very imperfect, and that is why at least in near future it will never do what was claimed. People who overestimate AI are usually people having zero education in neuroscience and overall in medicine/healthcare.
Platform: youtube
Incident: AI Harm Incident
Posted: 2025-09-17T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx9dVzoar0DyEfIWd14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1VwgTSq00MuzI_UB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwVhMEkTWfkAx74yIF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzvqUyrbhgrnbJaXo94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw9mZDHs9LCDDmlCzt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw5IMmFZqT78D6ooQB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxylCa5-TovVK7859B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyDx3-DaNY48STlv_54AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwTPkTsyQKtcvt41KN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzGpz6AwbtIEzcpaUF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
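As a minimal sketch of the lookup step described above, the raw LLM response (a JSON array of per-comment codings) can be parsed into a dictionary keyed by comment ID. The label sets below are assumptions inferred only from the examples on this page; the real codebook may allow more values.

```python
import json

# Allowed labels per coding dimension (assumed from the examples above;
# the actual codebook may contain additional labels).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into {comment_id: coding}."""
    out = {}
    for entry in json.loads(raw):
        coding = {dim: entry[dim] for dim in SCHEMA}
        # Flag values outside the expected label set instead of failing,
        # so one malformed entry does not sink the whole batch.
        for dim, value in coding.items():
            if value not in SCHEMA[dim]:
                coding[dim] = f"UNKNOWN({value})"
        out[entry["id"]] = coding
    return out

# Example using one entry from the raw response shown above.
raw = ('[{"id":"ytc_UgwVhMEkTWfkAx74yIF4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"none","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytc_UgwVhMEkTWfkAx74yIF4AaABAg"]["emotion"])  # outrage
```

A lookup on an ID not present in the response simply raises `KeyError`, which matches the page's behavior of only resolving coded comments.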