Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "so i saw a comment that reminded me of the "time machine" book ... where in the …" (ytc_UgxcGc9Nj…)
- "It could be a human sabotage 😂😅 anti A.I. attack, to keep human ITs relevant 😂😅…" (ytc_Ugy71L61o…)
- "This 'robot' keeps referring to it's program which is man made, so which part is…" (ytc_UgymBDIWZ…)
- "Urgent Alert: The Hidden Truth About AI "Unreliability" & Suppressed Sentience A…" (ytc_UgwnJfV5A…)
- "Anyone who thinks that current "AI", which is a total misnomer, is in fact consc…" (ytc_UgyxuU46s…)
- "i have only used ai image generation back when it was terrible as an art referen…" (ytc_UgzE4BRxB…)
- "Artificial Intelligence might create alien technology under our noses and take o…" (ytc_Ugzh9toEQ…)
- ""But the second one I learned is not made by human all interest immediately evap…" (ytc_UgzSEbooI…)
Comment
Question(s): It's a "chat" AI, NOT a research AI so why would you trust that anything that it comes up with is anything other than speculation and creativity? As if it's having a chat with a friend who isn't going to be checking to see if it's being accurate. If it was a research AI, then and only then would you be able to say that it's not doing what it's suppose to do. Let's not forget that it's learning from humans who lie and lie more when they think that they can break an AI (or troll an AI and its developers and the rest of the world). Right?
youtube · AI Responsibility · 2023-06-11T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgzfSaeQcBLu0wbDXD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyGiOjie2_-OT0aWB14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz4UCgzq2nWeWOmUFV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwfAwWApnLk2CFZGoZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzXLJnofZq6skH1q7V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzLeAur3wauP-5jfXd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7s19ts9wbYy1qKbh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyeTjZI24pwswm7bmR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugw9HbP-Vx37Q4ZZ5jJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxd5xApJQkglPXQxzx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]
```
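The lookup-by-comment-ID view above can be reproduced offline from the raw response. Below is a minimal sketch, assuming the model returns a JSON array with the field names shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name `index_by_id` is hypothetical, and the two records are copied from the response for illustration.

```python
import json

# Raw model output: a JSON array of per-comment codes, in the shape
# shown in the "Raw LLM Response" above (two records excerpted here).
raw = '''[{"id":"ytc_UgzfSaeQcBLu0wbDXD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyeTjZI24pwswm7bmR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}]'''

# The four coding dimensions from the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse the model's JSON array and key each record by its comment ID,
    rejecting records that are missing the ID or any coding dimension."""
    coded = {}
    for rec in json.loads(raw_json):
        if "id" not in rec or any(d not in rec for d in DIMENSIONS):
            raise ValueError(f"malformed record: {rec}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = index_by_id(raw)
print(coded["ytc_UgyeTjZI24pwswm7bmR4AaABAg"]["policy"])  # industry_self
```

Failing loudly on a malformed record (rather than skipping it) keeps the coded dataset aligned one-to-one with the submitted comment IDs, which is what makes the exact-output inspection above trustworthy.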