Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The fact that you guys actually think you can spot good AI is quite amusing. The…
ytc_UgyAD8OBV…
Artificial intelligence won't be reached within our lifetime so this won't be a …
ytc_UgijakQOO…
It seems like your comment might be a bit off-topic from the dialogue about wisd…
ytr_UgzyL4Z-x…
I love how after years of automation fucking over people from all across the lab…
ytc_UgwFtOPHj…
I lost faith in people not because of the use of Ai but the justification…
ytc_UgybTX0Om…
Yeah this is just stupid. AI is soulless because you aren't making it yourself. …
ytc_UgzKkknS-…
Please do a video on how to remove copilot from the new outlook. It will get a l…
ytc_UgySpraAn…
There are at least three lessons:
The first is AI will never replace the human b…
ytc_UgwngrD_W…
Comment
The topic at appx 17:07 reminds me of "The Tiffany Problem" faced by creative writers.
The basic principle is, people believe credible-sounding associations more than hard facts. For example, one can't use the name "Tiffany" in a historical fiction novel, because the population at large doesn't associate that name with any time period before the 1900s. It sounds "too modern" and breaks the reader's verisimilitude, despite the fact that the name "Tiffany" has been in use since the 12th century. ("Tiffany" is short for "Theophania" - the feminine version of "Theophania").
Anyway, it doesn't surprise me that AI prioritizes "sounding good" over being factually correct.
Fascinating stuff. Thanks, y'all!
youtube
AI Moral Status
2025-12-31T00:2…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwR0KTcJzfZClYUfUp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxmLnN8aUrKs8NdHmd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx3atiBF_UAseYEMyN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz8KnisJP2_V8gVV2Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmQ9qsKgncJ4oPhDB4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz2rapkG0ziXD2ZObh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwn8qj6IYR7McEx7EJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzb-AhnURvnECZExOJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyXTdJz7q3st2ci12t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuDifMoN7y0Md-jT14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}
]
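Downstream code consuming a raw response like the one above has to cope with malformed records before aggregating per-dimension counts. A minimal validation sketch, assuming the response is a JSON array of per-comment records with the four dimensions shown above (the allowed value sets below are only those observed in this sample; the actual codebook may define more):

```python
import json

# Values observed in the sample response above; the real codebook may allow more.
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "company", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "resignation"},
}

def validate_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it is a dict with an "id" and every coding
    dimension holds a value from the observed vocabulary.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical single-record response for illustration (not a real comment ID).
raw = ('[{"id":"ytc_X","responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"outrage"}]')
print(len(validate_coded_batch(raw)))  # 1
```

Records that fail validation can then be re-queued for coding rather than silently entering the tallies as "unclear".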