Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Absolutely loved the predictions from Sabine. Although they are predictions and may not be 100% correct, they are mostly to the point, and I love that we get something conclusive from this channel; you have to be brave to predict.
Counterpoints I would add: even without human-like intelligence, text is the default medium of our professions and work, and LLMs do have potential in tasks that are non-critical (not medical or financial) and have verification loops; those can be automated and already are being. And yes, 90+% of work, 90+% of the time, is not innovating but reproducing, so LLMs can help there as well. Also, when working with trusted sources, like my own work or APIs I own/trust, there is no risk of prompt injection as there is with external communication.
I am not denying any of Sabine's points, just showing a way that LLMs can still be productive and effective, if not entirely (which is good, as we keep our white-collar jobs).
Also, one point Sabine didn't mention is the limited context window and context rot: LLMs don't have infinite memory and all the local context that we have; instead, we must load the relevant context every time we need them to work on a task, which is a major limitation.
youtube
2025-12-24T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugxq7DOfILXcmtB9wGJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwxBaR_hDLHUlGjbVZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwEtG8t04RNKy6oS3J4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx7pufsNm1it4fVXDR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyEmFB0_08wWRFkMrR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzuexsL6QizlmwQSTl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyuIpkqQ0lJDSWcsER4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwTY_ojOdE8e2FFtJ94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzh3-5JI-iIddVoxU94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugytda5P2ZTHMxXQXO54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
```
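The raw response is a JSON array of per-comment code objects, one per comment ID, with the four dimensions shown in the table above. A minimal sketch of parsing and validating such a response (the field names come from the response above; the sets of allowed code values are an assumption inferred from the values that appear, not the full codebook, and the comment ID used in the example is hypothetical):

```python
import json

# Allowed values per dimension -- inferred from the values observed in the
# raw response above; the complete codebook is assumed, not documented here.
ALLOWED = {
    "responsibility": {"none", "developer"},
    "reasoning": {"none", "mixed", "consequentialist", "virtue", "deontological"},
    "policy": {"none"},
    "emotion": {"none", "mixed", "indifference", "resignation", "approval", "outrage"},
}


def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, dropping rows
    whose values fall outside the allowed sets."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[row["id"]] = codes
    return coded


# Hypothetical single-row response in the same shape as the output above.
raw = (
    '[{"id":"ytc_example","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"approval"}]'
)
print(parse_codes(raw))
```

Validating each row before accepting it matters here because LLM output is not guaranteed to stay inside the codebook; dropping (or flagging) out-of-vocabulary rows keeps downstream tallies clean.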