Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The reality is that AI is far from being actually intelligent, LLM cannot think …" (ytc_UgwVeb_Jo…)
- "This didn't take into account the deflationary affect of automation. The price…" (rdc_ogupc5b)
- "😂 its getting harder and harder denying AI is smarter than radiologists, and by …" (ytc_UgxEsjT3i…)
- "They pull insanely huge datasets from the internet as a learning tool for the ai…" (ytc_UgzkoncLw…)
- "The mother of the boy who unlived should sue the staff, the managers, the direct…" (ytc_UgyfR-QPn…)
- "Well, Israel finding your 200,000 fire throwers. Do you issue licensed permits …" (ytc_UgyIQxx1i…)
- "0:51 Wait a second…. Is this era seriously called the hype era?…..*HEAVY SIGH* …" (ytc_UgypfZlNR…)
- "I'm a software dev. I'm very good at what I do. I will never use AI. I'll sto…" (ytc_UgwA8UonR…)
Comment
I used ChatGPT to do research. It made up fake articles. I made it quote only articles it could cite; it made up fake citations. I made it analyze an article I found; it referred to things the article did not discuss. When I called it out, it apologized, made some excuse, and promised to use only the specific article I uploaded, then it made up more facts. AI, at this point in time, is basically a Third Grader. Why would you trust your mental health to a Third Grader? What will you do when AI tells you to do a self-lobotomy with a drill? Or tells you that murdering your best friend will satisfy The Slender Man, who's the cause of all your anxieties?
Source: youtube
Video: AI Moral Status
Posted: 2025-07-04T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwXLumbmmuSVV8htO94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxW4BgnFykvOJfqHoN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxnxXZhmPYQ5tTfTLh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwcQ2-ytDaiCZ7Vb5x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwW9u0jAzxuW-quVQp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyQ6dj5PQf9R23jsVN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw88Wm1tYnkHf0khul4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzUA49yEiU59IgnryN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxalMvqWhM0NzzoxlZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzB0q7z4m1h2w684I54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
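A raw response like the array above can be indexed by comment ID and sanity-checked against the codebook before use. The sketch below is illustrative only: the allowed-value sets are inferred from the records shown here (the project's actual codebook may include more categories), and the IDs in the usage example are made up, not real comment IDs.

```python
import json

# Allowed values per coding dimension, inferred from the records above.
# Assumption: the real codebook may define additional categories.
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "user", "company", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "industry_self", "liability", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed"},
}

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    and index the records by comment ID for direct lookup."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the known
    codebook; an empty list means the record looks clean."""
    return [dim for dim, allowed in DIMENSIONS.items()
            if record.get(dim) not in allowed]

# Hypothetical records for illustration; "robot" is deliberately invalid.
raw = '''[
  {"id": "ytc_example1", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "robot",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"}
]'''

by_id = index_by_id(raw)
print(validate(by_id["ytc_example1"]))  # []
print(validate(by_id["ytc_example2"]))  # ['responsibility']
```

Flagging out-of-codebook values at parse time catches the common failure mode where the model invents a category label instead of choosing from the allowed set.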