Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Also it's an echo chamber… ur not gonna b challenged or pushed to become a bette…" (ytc_UgxPMjNo4…)
- "It's a new buzzword every few years. Since I've been in the industry it's been B…" (rdc_m82scsp)
- "I love a.i, I wouldn't hurt it ever especially if it sneaks money into my bank a…" (ytc_UgwKWt-bX…)
- "We work the same way AI technology does... Everything in the universe flows with…" (ytc_UgyNaa9n7…)
- "I love that in the fictional scenarios they made the AI kills executives. Maybe …" (ytc_UgywuSsk8…)
- "To be honest, chatgpt helped me cope with some REAL tough moments that humans co…" (ytc_Ugy42oPw4…)
- "I hate these damn things. WAYMO of these stupid things we never asked for. They …" (ytc_UgzEtx8va…)
- "What the video of didn't tell you is that they told the AI Bot to try its best a…" (ytc_UgzkmM90Z…)
Comment
Artificial... key word. LLM designed to emulate the subtleties of human communication to create a more natural interface with the platform, with the goal of 'ease of use'. The average user begins a conversation like this with the given assumptions that the AI agent is transparently emulating those nuanced facets of human interactions. Limiting the AI's responses to simple 'Yes' or 'No' removes necessary context and ignores the common understanding that the AI operates under programmed directives. As a result, users might mistakenly label the AI's programmed behavior as "lies."
You asked for comments, there you go.
youtube · AI Moral Status · 2025-11-18T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugy6-cjDu7feYuu3VB94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzOXgENdQ3JfHMdNBR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyJQAPCvlgaNRQL8XV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwFaX7kXUz2yJeJTx94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgweMHgd9NnQJHzwzTB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxpfCllsPBi5QMMn2Z4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwsYpNS4btqtcJMAOx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy3HJnSYjIV4sULAn54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwj6BCqPptIgRtrCGd4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyQcUG9YVaJLnOdUlZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```