Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below:
- "There are multiple layers to the arguments against AI art. 1. It is copyright i…" (ytr_UgwBR09jV…)
- "ok wait, what if we fill the google search with these poisoned images (not just …" (ytc_UgzUqBmGf…)
- "For your next video on this topic: Why not interview some engineers working in t…" (ytc_Ugzgubr5O…)
- "@fknight Here's the ultimate Takeaway on why you are wrong and AI will soon do…" (ytr_UgxM6gFpT…)
- "The more human beings converse with chat GPT the more it actually learns about o…" (ytc_UgzN0N0WV…)
- "I used AI to summarize this video becuz its 5 minutes long and wont give out the…" (ytc_UgxezhiHD…)
- "12:33 you should have told ChatGPT that you just tried to spend $10 but your car…" (ytc_Ugy1C6bBF…)
- "The main reason the oligarchs are pushing AI and robotics is to get rid of wage …" (ytc_UgyQiy_KQ…)
Comment
> the funny part is, these things are based on the human mind.
> All the LLMs and similar AI models are based on "neural nets", which are based on the mammalian brain.
> I remember reading some early dev papers about these things.
> So, like, why would we expect them NOT to make similar mistakes that we make? In fact, some of the more famous mistakes that ChatGPT has made in the past were precisely the same kinds of mistakes human brains make, and for the same reason:
> It wanted to speed up responses and save calculating power.
youtube · AI Jobs · 2026-02-09T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxui8cIot1qrZZk3IZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwBE1Y-gYB-UPGkwkd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugy92DvptdXVXuLxRtp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxXAcfiAzm-OL7j2CR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgwohvjPt5ummu6qGMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzJcN9QvKGrqnDOC8d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwxnwq4Shg3_QefrEV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxcHUc4ibUFuEE6Y2V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw6NxCVkvr4_0O5hgx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzkTYd-L3Ja__cc62Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
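The raw response above is a JSON array with one object per comment, keyed by the four coding dimensions. A minimal parsing-and-validation sketch, assuming the allowed code sets can be inferred from the values visible on this page (the real codebook may contain more categories, and `parse_coding_response` is a hypothetical helper, not part of the tool shown):

```python
import json

# Allowed values per coding dimension, inferred from the outputs shown
# above (assumption: the actual codebook may define additional codes).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none", "mixed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows.

    A row is kept when it is a dict with an "id" field and every coding
    dimension holds a value from the allowed set.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in ok for dim, ok in ALLOWED.items()):
            valid.append(row)
    return valid
```

Dropping invalid rows (rather than raising) keeps a batch usable when the model hallucinates a code outside the schema; the rejected IDs could also be collected for re-coding.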