Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
What most people miss is the fact that current AI isn't that nigh-perfect, consc…
ytc_Ugykv7Fnm…
@ProfessorDaveExplains I think it’s possible the CEO, of whatever AI company tha…
ytr_Ugz5r-GSL…
Are we sure they were AI? It could be they were real so he felt the need to end …
ytc_UgwTlEZxM…
Doesn’t more or better AI give us the ability to produce more or better as oppos…
ytc_UgwqPr6xy…
It’s only based on information the AI has access to. The creators can block or …
ytc_UgxGKQVKZ…
You could argue this comes off as pretentious nonense, but every single AI-gener…
ytc_Ugw6uQcZi…
All of this is based on the Internet. Either we shut down the Internet, or we ke…
ytc_UgxXeOrI6…
when i first learned about ai i thought it was so cool and that it was such a us…
ytc_Ugy2iBV-5…
Comment
Gary Marcus has been right about AI all along & his point of view is finally being vindicated.
LLM’s are not the model we should build our society around.
LLM’s are essentially “predictive text” on steroids & those hallucinations (ERRORS) will never go away & only get worse over time.
The Billionaires will get richer and all we’ll get is more AI slop - The same AI slop which is used to train the next generation of AI until sometime pretty soon the internet will be useless for anything other than entertainment because you won’t be able to trust anything it says (it’s already starting to get like that)
youtube
AI Moral Status
2026-03-14T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyR5bBebEzuBWEXXNt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwwkdjXG-fKM00C7EZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwjf3S9RESN2ILuEuR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz-_ebKHFiVK4TeUQ94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwNgbGcmSudO7bdXNR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwmIzmbZGsTC1EqeRp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyR6ME54VjJyV8ZQpt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyyfRmvvaWFIALIoPt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyTUurOwvdDhkRpRQF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwcxviukzlMXn5YbDV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"}
]
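The "look up by comment ID" view above can be sketched in Python: parse the raw JSON array the model returned and index it by `id`. This is a minimal illustration using two entries copied from the response shown above; the real response contains one object per coded comment, and nothing beyond the displayed fields (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) is assumed.

```python
import json

# Two entries copied from the raw LLM response above (abridged for brevity).
raw_response = """
[
  {"id":"ytc_UgyR5bBebEzuBWEXXNt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwNgbGcmSudO7bdXNR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
"""

# Parse the array and build an index keyed by comment ID,
# so any single coded comment can be looked up directly.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

coded = codings["ytc_UgwNgbGcmSudO7bdXNR4AaABAg"]
print(coded["responsibility"], coded["policy"])  # company regulate
```

The same dictionary supports the random-sample view as well, e.g. `random.sample(list(codings.values()), k=5)` over the full set of parsed entries.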