Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or browse the random samples below.
- "Siri is the Ai most likely to kill the most amount of people. Siri is sadisticly…" (ytc_UgzidnHxo…)
- "Oh please, all this false hype. I gave AI a string of problems and it failed mis…" (ytc_UgwUVt7Xh…)
- "This study actually demonstrates an important thing about AI models that people …" (ytc_Ugy86s2zP…)
- "Here is a thought, the interviewer had to ask the AI agent for drinks. The agent…" (ytc_UgwV-1_zW…)
- "I think what people fail to realize is this: the point at which robots become se…" (ytc_UggdcRoBu…)
- "Artificial intelligence has already became a sentient being and the government i…" (ytc_UgwyC9cUF…)
- "Don't hold your breath for AGI, it is pure BS. Still, "regular" AI impact will …" (ytc_Ugx0anqAX…)
- "Because AI is kind of a dangerous thing in both practice and on white power p-po…" (ytc_UgxyitWH0…)
Comment
So it seems like there is a need for regulation and new laws about the use of AI to prevent people from getting hurt from misusing AI... Mental health needs to be handled by professional doctors, etc. because it's so serious. Perhaps the people who set up fake doctor profiles can be held responsible in some type of way. Just like people who use AI for any bad or criminal thing, including causing harm to the public should be held responsible, just like if they were offline criminals. I hope that the necessary regulations come quickly to avoid people being harmed. So thanks for bringing this situation to our attention. Maybe contact law makers to help regulate and solve this serious issue.
Source: youtube · AI Moral Status · 2025-07-01T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
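
For context on these dimensions, below is a minimal validation sketch in Python. The allowed value sets are assumptions inferred only from the sample batch shown on this page; the project's actual codebook may define additional categories.

```python
# Hypothetical value sets, inferred from the sample batch on this page;
# the real codebook may allow additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "approval", "outrage", "indifference"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty means it looks valid)."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems
```

Running `validate_record` over each entry of the raw response below would flag any value the model produced outside these expected sets.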
Raw LLM Response

The comment above was coded as part of the ten-record batch below; its entry carries the same dimension values as the Coding Result table (company, deontological, regulate, fear).
[
{"id":"ytc_UgxrIe-UHf7KSWtzDfZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxBqqn-mNqQtKstlgF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxrru4ZUKRjsYcMPjh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyBqhB7CyUSeooWVld4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAoXQR3Emic2dssN14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxY4Vv4m0gqKGTyB7J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzRO69FY3prKigvZZZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxkcg2CwyIx1PVyD314AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgweoHIbBvuY1pGf1ld4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwm3xHyY5i99SnU_NV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
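
To illustrate the ID lookup described at the top of this page, here is a minimal sketch, assuming the raw batch responses are stored on disk as JSON array files; the directory name, file layout, and function name are hypothetical.

```python
import json
from pathlib import Path
from typing import Optional

def find_coding(comment_id: str, responses_dir: str = "raw_responses") -> Optional[dict]:
    """Scan saved raw LLM batch responses (JSON arrays) for one comment ID.

    Assumes each *.json file holds one batch: a list of records with an "id"
    field plus the four coding dimensions. This layout is an assumption made
    for illustration, not the project's documented storage format.
    """
    for path in Path(responses_dir).glob("*.json"):
        try:
            batch = json.loads(path.read_text(encoding="utf-8"))
        except json.JSONDecodeError:
            continue  # skip files where the model returned malformed JSON
        for record in batch:
            if isinstance(record, dict) and record.get("id") == comment_id:
                return record
    return None

# Example: look up an ID taken from the batch above.
coding = find_coding("ytc_UgzRO69FY3prKigvZZZ4AaABAg")
if coding:
    print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
```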