Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- rdc_dd4ff62 — "Emergent properties have been proposed in various ways to address the consciousn…"
- ytc_UgzZotQVd… — "Didn't Microsofts AI become unglued and started having misalignment problems whe…"
- ytc_UgyRJSUU5… — "This whole thing with real artist vs ai kinda reminds me of the Balled of John H…"
- ytc_UgzzRbuLN… — "As an aspiring doctor I do not want to study for 15 years and end up in debt jus…"
- ytc_UgilP4I0e… — "As a programmer I can't see robots or Ai being harmful to humans. Programming is…"
- ytr_UgwGbCdGv… — "You're right, you just don't get that it's unethical lol / Like, if you were in a …"
- ytc_UgwnkJwzl… — "I use AI several times a day for a multitude of tasks. It requires caution, but …"
- ytc_UgwhZbYOC… — "It’s a very very good deepfake but u can still tell it’s fake especially around …"
Comment
Living with a person while they were getting their PhD in a science field, I learned (a) that professors have absolutely zero pedagogy training (and many professors have no interest in getting the training either), and (b) the way doctoral programs are structured incentivizes doctoral candidates to make shit up in order to finish their research and publish papers and incentivizes their advising professor and fellow candidates to keep mum about discrepancies they find with the reported data. There is a churn with higher education, to get people in and out, and most people involved are just trying to get to the next thing, even at the upper levels of learning like PhD programs. This was 5+ years ago, so before AI took over.
Source: youtube · Posted: 2025-07-31T19:5… · ♥ 17
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz7o6pErtElIiQbm2J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwG8eoyUQOdnE2AwO14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxncthlgLVEuFvFVD54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxQvljNR8Vfa5DQnal4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgypH83aE1F-B0dm_NZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyOkcIHkybdjGJ3PhR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwITCn9k0R3ebBB3GR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugyn39-s0aQ2BAfinXJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgymA5SkoqA-4uj26pt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzfT9H3CV3qm_fiZtt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
```
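The per-comment coding result shown above is recovered from the raw batch response by parsing the JSON array and indexing it by comment id. A minimal sketch of that lookup, assuming the response is a JSON array of objects each carrying an `id` plus the four dimension keys seen in the sample (the `lookup` helper name is hypothetical, not part of the tool):

```python
import json

# A trimmed raw batch response, shaped like the sample above.
raw = """
[
  {"id": "ytc_UgxncthlgLVEuFvFVD54AaABAg",
   "responsibility": "distributed", "reasoning": "deontological",
   "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz7o6pErtElIiQbm2J4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "unclear", "emotion": "resignation"}
]
"""

def lookup(response_text: str, comment_id: str) -> dict:
    """Parse a batch response and return the coded dimensions for one comment."""
    codes = {row["id"]: row for row in json.loads(response_text)}
    return codes[comment_id]

row = lookup(raw, "ytc_UgxncthlgLVEuFvFVD54AaABAg")
print(row["responsibility"], row["emotion"])  # distributed outrage
```

A dict keyed on `id` keeps repeated lookups O(1) when inspecting many comments from the same batch.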