Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- @massimilianodelrosso5495 How about: AI is a completely neutral tool that big, s… (ytr_UgwcRUAYr…)
- Tax corporations for using AI/Robots. Put that tax into a UBI system. Literall… (ytc_UgygbCLgm…)
- If an airgapped AI can find a way to bribe a human into forwarding code to the g… (ytc_UgxoTf6Hc…)
- AI wasn't supposed to create art in first place. It was supposed to be used to a… (ytc_UgwUXHMgO…)
- Neuro has a filter. Than a filter over that filter. THAN A WHOLE ASS AI PROGRAMM… (ytc_UgxpJmYXL…)
- We understand your concerns about the advancement of AI technology. It's essenti… (ytr_Ugz2FI98W…)
- This video feels like what an AI generated video for social media to get the mos… (ytc_UgxX59G6m…)
- Cause they were putting nonsense in there that affected the AI as it was not gro… (ytr_Ugxpnl42c…)
Comment
AI therapy IS a terrible idea, but this is not how training data works. Your personal information might be forever stored on a server in the U.S., taken to court, leaked, and is definitely going to be used for for ad targeting very soon. AI can infer most of the things you chose to leave out (perfect pattern recognition). And if you aren't really skilled in prompting, it might mistakenly isolate you from real people, or cheer you on when having suicidal thoughts (did that to a couple of people, especially on 4o). But it won't add your chats to the output others get (at least that's what a tech expert told me).
Platform: youtube | Video: AI Moral Status | Posted: 2025-12-10T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxlF1L6tmmZwICwZnF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz8A34YVRKdoB0drsV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxIhwIswLZvUHNghVN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxonWHkW8S2xFE5WW54AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzQe6yKrapLkO03YfB4AaABAg","responsibility":"user","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxN22g8SC_29zxOg9J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyljUkiFNSzKWj7Z9V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw7xtYCAKh8J5ArvHx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy9ho66YdS29Tu_ljl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw9ppwqgGIFf4jiMfJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
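A raw batch response like the one above can be checked programmatically before the codes are stored. The sketch below parses the JSON and validates each record against the four dimensions shown in this dump; the sets of allowed values are assumptions inferred only from the samples visible here, and the full codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the samples in this dump
# (assumption — the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every coded record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dump start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Minimal usage example with one record from the response above.
raw = (
    '[{"id":"ytc_UgxlF1L6tmmZwICwZnF4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)
records = validate_batch(raw)
```

Failing fast on an unknown category is deliberate: a model that drifts off the codebook should surface as an error at ingest time, not as silently miscoded rows in the results table.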