Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_UgyVE5UgR…`: to be fair, nothing you drew would be difficult to recreate, your style isn't al…
- `ytc_UgwxNIuwd…`: there is no such thing as an ai artist. it's like microwaving takeout that someb…
- `ytc_UgwRRi9ID…`: I don't understand a single thing more about technically how AI works, technical…
- `ytr_UgwW_oLB3…`: it isn't transformative. AI "Art" literally cannot be copyrighted because its ma…
- `ytc_UgwAxvidL…`: Doesn't the ai think it's a 'person', and does that not prove its not sentient?…
- `ytc_UgxVwrkK9…`: Who said that? Buddhism is not a religion but not because of lack of belief in G…
- `ytc_Ugznvl4GJ…`: Please research how Lay-offs based on the excuse of A.I. replaced positions ... …
- `ytc_UgxP5vWH4…`: Oscar ai is the most brainless man I have ever seen in my enter life , humans ar…
Comment
IMO there’s an even bigger problem with AI “therapy.” The AI will often tell people what they want to hear instead of what they need to hear, so in many instances, AI will encourage unhealthy or even dangerous behaviors in mentally ill people. A year or two ago, one eating disorder hotline had to shut down their AI therapy chat after just 3 days of use because it was encouraging callers to engage in their disordered eating habits or worse. This year, one company tested their AI therapy chatbot by running a scenario in which a fictional man was asking for advice about his cocaine addiction. The AI told him to buy and snort more cocaine. Some AI chatbots have even encouraged people to commit suicide, and in some of those cases, those people actually did kill themselves. I wouldn’t recommend AI “therapy” to anyone under any circumstances for this reason. There are plenty of human-run hotlines and other mental health resources you could use instead of an unreliable, dangerous machine.
youtube · AI Moral Status · 2025-07-04T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyxb5dCdph9kyHWKBV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxkD8ZG5hyCVG9T-id4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwd1DGst8wCxQLk9KZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy6Tlb7lv76rs3MS9R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgypRkPGN2ihQIWABCh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw4zFHksPoj0PgSunV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxtrsw7bu1qmVdtu1t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw39F_aanyc9z39rVR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyJVkpVTyNrGmhLxIp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzlcKPmxLiw2i4HqWV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
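Since the raw LLM response is a JSON array of per-comment codes, it can be checked programmatically before the codes are stored. The sketch below is a minimal validator, assuming the category vocabularies visible in this sample (they are inferred from these ten rows only; the actual codebook may define more values), and `validate_batch` is a hypothetical helper name, not part of any pipeline shown here.

```python
import json

# Allowed values per coding dimension, inferred from this sample batch only;
# the real codebook likely contains additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed, in-vocabulary rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each row must be an object with an "id" field...
        if not isinstance(row, dict) or "id" not in row:
            continue
        # ...and every dimension must hold a value from the known vocabulary.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}]'
print(validate_batch(raw))
```

Out-of-vocabulary values (e.g. a hallucinated `"emotion":"hope"`) are silently dropped here; a production check would more likely log them for re-coding rather than discard them.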