Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Ahhh, super intelligence ehhh. Isn't there warnings about this in the word of Go… (ytc_Ugxlj0UwK…)
- The "Human Art Club" poster is just perfect~ I would like to add on the tarpits… (ytc_UgyQnusjX…)
- interestingly, back in the day, digital artists were treated the exact same way … (ytc_UgxujJzVF…)
- when it comes down to the plagiarism/stealing, the theft argument is insanely we… (ytc_Ugyw5JdIR…)
- If they paid us (AI trainers) a whole lot more, say, an actual fucking living wa… (ytc_UgzYn-TFd…)
- I question what these AI nerds mean by productive or "real jobs" what are consid… (ytc_UgwBJ_fJw…)
- You are wrong, it's not that AI doesn't have art, it has ALL of the art.… (ytc_Ugw9D0RmK…)
- These tech-nerk mfkers give me a cosmic case of the red ass. I am 70 yr old ret… (ytc_UgyQaAtuJ…)
Comment
While I agree that people on the Internet don't admit lack of knowledge enough… There is enough IDK in the full training dataset used for pretraining to be generalized. However, it is being tuned out by instruct-training step where IDKs are manually curated out. Now, I agree with the statement that we can rid LLMs of hallucinations at the cost of their utility. If the model was never exposed to the instances of IDK as a viable option — it will try its very best to answer every query, but hallucinate if it's out of its depth. Now, if you do expose it to IDKs, it is likely that IDKs will be generalized as an unconditionally valid completion for various queries that are in fact rather within its depth, thus falsely claiming lack of knowledge. I won't go into probabilistics, but you can surmise why the false claims of the latter would naturally occur more often than the false claims of the former.
youtube · AI Moral Status · 2026-02-06T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
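
Each coded dimension takes a value from a closed category set. The sets below are assembled only from values visible in the raw responses on this page (for example `developer`/`none` for responsibility, `consequentialist`/`deontological`/`mixed`/`unclear` for reasoning); they are an assumption, not the tool's authoritative codebook. A minimal validation sketch under that assumption:

```python
# Hypothetical validator for one coded record. The allowed-value sets are
# inferred from the responses shown on this page; the real coding schema
# may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"industry_self", "liability", "regulate", "none"},
    "emotion": {"mixed", "fear", "outrage", "indifference"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with a coded record; empty means valid."""
    problems = []
    if not record.get("id", "").startswith("ytc_"):
        problems.append(f"unexpected id: {record.get('id')!r}")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems
```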
Raw LLM Response
[
{"id":"ytc_Ugw65ddg8VmmnUwyhZl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwrwf5djvzf1A4SzoJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxU93Imm0fGEf2_EG94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyCY8IUcgLecOVXmvt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxte5dilGNP9aCBjLt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz6k6Zxf9fxHuBflVt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyGGsy-Cc7bvs5zlWR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxWPwevF96UMECGT_h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxBRQ_LaqDIqjkmBAN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxk695FWI9EapL4Lb14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
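
The raw response is a JSON array of per-comment records, so "look up by comment ID" reduces to parsing the array and indexing it by `id`. A minimal sketch, assuming the response text is available as a string; the function and variable names here are illustrative, not the tool's actual API:

```python
import json

def index_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index its records by comment id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Example: the record whose dimensions match the Coding Result table above.
# codings = index_response(raw_response_text)
# codings["ytc_Ugz6k6Zxf9fxHuBflVt4AaABAg"]
# -> {"id": "ytc_Ugz6k6Zxf9fxHuBflVt4AaABAg", "responsibility": "developer",
#     "reasoning": "consequentialist", "policy": "industry_self",
#     "emotion": "mixed"}
```

Note that the `Coded at` timestamp in the table is not part of the model output; it is presumably attached by the coding pipeline when the response is stored.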