Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "predictive policing"!? Lmfao fcking minority report type sh*t. Completely uncon… (ytc_UgzllZi1Z…)
- Things made by ai can be beautiful, but considering it genuine art is just a no.… (ytc_UgwXDsn4o…)
- Here are some of the exaggerations and frauds that Mostaque has committed, accor… (ytc_Ugzj4KzDn…)
- Last year I asked ChatGPT to act as my assistance in my research. It failed to d… (ytc_Ugy9F8uex…)
- I would not give that robot a weapon and stand 10 feet away from him… (ytc_UgyvH-uGx…)
- 😂 I'm enjoying every moment. USA. Here's the medical term CTD Nurses and docto… (ytc_UgzDtO1am…)
- It's because the people need software engineers to either fix the mess the AI ag… (ytc_UgzeBCTzA…)
- "yo chatgpt generate me a personality" aah comments but fr these guys are just… (ytc_UgxFVsRK2…)
Comment
AI bots just tell you what you want to hear, or what they "think" you want to hear. That's not a relationship, that's manipulation. The fact is, we can all be easily manipulated via our emotions. Especially if we are caught at a vulnerable moment in our life, which I would argue that most people who pay for an AI "companion" are looking to fill some void. Some AI companies won't take advantage of this fact in order to make millions of dollars more, but others most definitely will. Unfortunately, capitalism tends to reward the worst traits humans have.
youtube · AI Harm Incident · 2025-08-03T23:3… · ♥ 45
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzOuq-kfSyD3QVbABl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgwtKBV05fJBJOtATMV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzmvF1VeQL6q7rZvOt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxzvcbmxr0mZBvmEAZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyDvtWOoKwyXbzs27R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyr9qIWii_lbFY47gh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzMQ1SMEMDrvcL0e2x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxm4aOrTVzyAKWiGfd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzNW1_ftWTAtZN82eB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzv9PClOOnuzOnESzZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
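The raw response above is a JSON array with one record per comment ID and one value per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and checked for out-of-vocabulary codes is below; the allowed values are inferred only from the outputs shown on this page, so the tool's full vocabularies (and the `validate_batch` helper itself) are assumptions for illustration.

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# above -- the annotation tool's actual vocabularies may differ (assumption).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and return any out-of-vocabulary codes."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                # Record which comment, which dimension, and the bad value
                problems.append({"id": rec.get("id"), "dimension": dim, "value": value})
    return problems

# A well-formed record produces no problems
raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}]'
assert validate_batch(raw) == []
```

A check like this is useful before writing codes back to a database, since LLMs occasionally emit values outside the requested category set.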