Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by ID; below are random samples.
- ytr_UgxjP25VX…: Yeah that's the part I hate the most. These people probably couldn't even draw a…
- ytc_UgxZxVIzR…: *Entertainment people be like:* "How dare the AI produce mindless garbage to num…
- ytc_UgxniEVKH…: This is good, maybe thanks to AI finally they won't rule out female heart attack…
- ytc_UgxVtKWdm…: I say let's fight fire with fire deepfake of all the politicians especially men …
- ytc_UgxPmjNBV…: This is nothing but click bait fear pron. There's no such thing as AI, it's a ma…
- ytc_Ugx84COEW…: Nothing new can come out of that AI-meat-grinder. Its a nice WOW-effect for digi…
- ytc_UgyvS-uE7…: the life skills courses are good, but not the teacherless AI academic classes. M…
- ytc_UgxJBxPac…: A.I has been around for thousands of years and controls humanity for reason not …
Comment
Hell Sabine ! I prompted chatgpt to answer to today's transcript : here is his contribution :
Sabine, sharp critique—but three points deserve nuance. First, LLMs are not purely purpose-bound: large transformers already show cross-domain transfer (text→code→vision) through shared latent representations. Second, “interpolation only” is overstated: systems like AlphaFold or recent math-reasoning models show genuine structural generalization beyond their training examples. Third, prompt injection is largely an interface and architecture issue; emerging agent systems separate policy, memory, tools, and execution layers to mitigate it. AGI likely won’t be just LLMs—but they may become the cognitive substrate within broader world-model-based agent architectures. Chatgpt auto
Source: youtube, 2026-03-04T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgznwGLPmJygeYfPA1F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx5p1jc44zl-WcLPhx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5zdNyAcrFimL0Qnt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzKXn51KqUarcSlT4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGmSx3jc6a6U37w9F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz0iv3Y-StBNwTyhIF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy4U-EeifEDIibR0CV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwsMTNkDq_TU5spaQx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzSIfPzjHX5MCeKI9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugwo0n8FSQq2MmLtI0x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
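A raw response like the one above can be checked programmatically before its codes are trusted. Below is a minimal sketch in Python; the `validate_coding` helper and the `ALLOWED` value sets are assumptions inferred only from the values visible in this sample, so the real codebook may permit additional values.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above (assumption: the actual codebook may define values not seen here).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "indifference", "mixed", "resignation", "outrage"},
}

def validate_coding(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problems (empty = clean)."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    if not isinstance(records, list):
        return ["top-level value is not a JSON array"]
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing 'id'")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"record {i}: {dim}={value!r} not in codebook")
    return problems

raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]'
print(validate_coding(raw))  # prints [] when every record conforms
```

Running this over each stored raw response makes malformed JSON or out-of-codebook values visible at inspection time rather than later during analysis.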