Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "OpenAI needs an Internet connection because it requires too much processing to b…" (`rdc_m9i1cc2`)
- "Who says that it was a human that created a better intelligence? Maybe a human c…" (`ytr_Ugz9rdXWW…`)
- "People don't realize that AI is mostly a gimmick and degrades overtime. It's not…" (`ytc_UgxDegIH4…`)
- "PAY ATTENTION HOW THEY SHOWED A WORKER AT HER DESK WITH A UNICORN UNDERNEATH HER…" (`ytr_Ugy33vV7i…`)
- "I remember Andrew Yang was running for NYC Mayor, I thought he was a very bright…" (`ytc_Ugy-9oymb…`)
- "AI may not replace most engineers today, but it is certainly on that path. Recen…" (`ytc_Ugy1rb2H6…`)
- "She resigned and did not give them the honor of firing her. It is very hard for slaves to accept the truth.…" (translated from Arabic) (`ytr_Ugw9NuXDZ…`)
- "I've been using delve for the last 15 years, and now I'm needing to find replace…" (`ytc_UgzESkaum…`)
Comment
I understand people’s anxiety cause this is new and we can’t control it but I really don’t think this is going to be bad like we worry it will be. AI isn’t and never will be human. What is the motivation to do bad things? I don’t think that exists. AI “wants” to give the correct answer. I think it’s far more likely that when people try to get AI to lie or do wrong “bad” things it just won’t cause it knows that it’s incorrect. I don’t need it to care.
youtube
AI Moral Status
2026-02-14T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
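The four coding dimensions above appear to draw from a small fixed code book. A minimal validation sketch in Python, where the category sets are inferred from the values visible on this page (the real code book may contain additional labels):

```python
# Category sets inferred from values observed on this page (assumption:
# the actual code book may define more labels than are shown here).
ALLOWED = {
    "responsibility": {"none", "government", "user", "ai_itself", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "industry_self", "regulate", "liability", "unclear"},
    "emotion": {"approval", "indifference", "fear", "resignation", "outrage", "mixed"},
}

def validate_coding(row: dict) -> list[str]:
    """Return a list of problems with one coded row; empty means valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = row.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in code book")
    return problems

# The "Coding Result" table above, expressed as a row:
row = {"responsibility": "none", "reasoning": "consequentialist",
      "policy": "none", "emotion": "approval"}
print(validate_coding(row))  # → []
```

A check like this catches model outputs that drift outside the code book before they reach the results table.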
Raw LLM Response
[
{"id":"ytc_UgwDNvBt1RU1jLzrODd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1of0XxWW4F7u2CCF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwYFUHR-qCbaVRtRsJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyZLx4xtalhf6Frrad4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgylJK7D6NYyjm6_Zj54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyNG5bdZbi3Q_NFcrJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy0wEe9faydo4-wh6R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwNzkks9jFu5Hka_1x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxP6zaYhHh9fPK_hVd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyEk3S4weaUjWW1zMB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
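Since the model returns one JSON array per batch, looking up a comment by ID (the feature named at the top of this page) amounts to parsing that array and indexing it. A minimal sketch, assuming the raw response is valid JSON as shown (IDs below are the first two rows of the response above):

```python
import json

# Two rows copied from the raw LLM response above.
raw_response = '''
[
 {"id":"ytc_UgwDNvBt1RU1jLzrODd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw1of0XxWW4F7u2CCF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"}
]
'''

# Index every coded row by its comment ID for O(1) lookup.
by_id = {row["id"]: row for row in json.loads(raw_response)}

result = by_id["ytc_UgwDNvBt1RU1jLzrODd4AaABAg"]
print(result["emotion"])  # → indifference
```

In practice a raw model response may also arrive truncated or wrapped in a markdown fence, so production code would guard the `json.loads` call and report unparseable batches rather than assume clean JSON.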