Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
We need to differentiate between "AI" and large language models.
I have no doubt that ONE DAY, true AI absolutely will far exceed human capability across a broad range of tasks, and crucially it will know what it's doing.
In contrast, the LLMs we have today are language models that are trained using statistical methods to predict what the next word is most likely to be.
If you ask a true AI what your birthday is, it will answer "I don't know", because it doesn't know. If you ask an LLM the same question, it will confidently answer a date, because saying "I don't know" is guaranteed to be wrong, but answering a random date gives it a 1 in 365 chance of guessing correctly.
youtube · AI Jobs · 2026-03-22T23:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyKGzrz9BkFkpgYYCx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxwzk4L5L1kLasKKKR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyPtztW3sj5LMDNiyx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyps9pVRoc6sSwE6at4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx6YnfaHu3PcVjh5T14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxnG0QSAfsYi8_WMop4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzvX8SZazAqTnoqBVB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxXgUwpHOWP7WJJkIR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugwj0u24C6SVvDEEn8d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxoSTpJRmGK3Z2DqVJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
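A minimal sketch of how a raw response in this shape might be parsed and sanity-checked before use. This is an assumption about the pipeline, not its actual code; the allowed values per dimension below are only those observed in this response, not a complete codebook, and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, taken only from the values
# observed in the raw response above; a real codebook may define more.
DIMENSIONS = {
    "responsibility": {"none", "company", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "resignation", "fear",
                "mixed", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in DIMENSIONS.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# One record from the response above, used as a smoke test.
raw = ('[{"id":"ytc_UgyKGzrz9BkFkpgYYCx4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
records = parse_coding_response(raw)
print(len(records))  # → 1
```

Rejecting unexpected values at parse time (rather than mapping them to "unclear") makes silent drift in the model's label vocabulary visible immediately.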