Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "1. You don’t understand how humans learn and use language. 2. You don’t understa…" (rdc_mzxu7pc)
- "AI can't be conscious. It runs on a program and depends on the information it is…" (ytc_UgzVsUsDO…)
- "can't wait to be shuffled along and mishandled by a robot instead of another mis…" (rdc_jw6sye6)
- "The point of robotic hardware is to service humans, because it is because of hum…" (ytc_UggIsREG2…)
- "I've seen this video before but there was no robot , they edited in a real human…" (ytc_UgxcA_fS-…)
- "It took me five to six hours to write a Python app a couple weeks ago. I asked G…" (ytr_UgyuwqDAu…)
- "I've had lot's of conversations with ChatGPT that it flagged as suicidal. I told…" (ytc_Ugy75IFPv…)
- "I see ai as a smarter search engine that aides me with research. If you’re using…" (ytc_Ugx2Ijzy0…)
Comment
I still believe this is not going to be the huge threat it's made out to be. There's factually a large section of the population that's (for lack of a better word) gullible. They will literally believe anything if it comes from a source they find to be credible. No questions asked, no critical thoughts, no doubt. They DO NOT need AI generated videos to be convinced of the most insane things.
Sure the Internet is flooded with AI fakes but let's be real, it's been absolutely full of nonsense before. I don't care if my misinformation comes from a human or an AI.
The result is the same. The solution is the same. People need to learn critical thinking.
youtube · AI Harm Incident · 2025-12-30T11:3… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwjSeu9CW8hCw5gAhl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugypg_k4H78DcTxtD1J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzsM31MtpPSrOk_s1N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyY7Gmf9xeeCOpMtu94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx1lV3sEVnqyOEq-8F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgziWp2KbCGGxDcDvCx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzOMWAanYPshTUhOFt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx-agJChR-W466hhRV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwYFP5p7DfW8Diqxjt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwKA3ulc0zXokqbOYd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"approval"}
]
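The raw response above is a JSON array of per-comment coding records, and the "Coding Result" table is one record from that batch rendered as a table. A minimal sketch of how such a batch could be indexed for lookup by comment ID (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` are taken from the response shown above; the two-record payload here is an abbreviated copy, not the tool's actual storage format):

```python
import json

# Abbreviated copy of the raw LLM response above (two of the ten records).
raw_response = '''
[
  {"id": "ytc_UgzsM31MtpPSrOk_s1N4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugypg_k4H78DcTxtD1J4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
'''

# Index the batch by comment ID so a single coded comment can be looked up,
# mirroring the "Look up by comment ID" view in the log above.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings["ytc_UgzsM31MtpPSrOk_s1N4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → user consequentialist none resignation
```

The printed dimensions match the "Coding Result" table for the comment shown (responsibility: user, reasoning: consequentialist, policy: none, emotion: resignation).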