Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- On the "ai is more accessible" and the "I'm more of an ideas guy" I feel like it… (ytc_Ugx6CKULl…)
- I am waiting to see how long it takes for elite devs to realize working for a co… (ytc_UgzvE1TtN…)
- Artists have been taking it on the chin for ages. It's time to say "enough!" WE … (ytc_UgzcMtgxn…)
- By the time trump is gone, 20% unemployment and no safety net.The line between w… (ytc_UgzC-n2F8…)
- If, as she says, the robots learn from the mistakes they make, it is considered … (ytr_Ugz7zvP9_…)
- As soon as i finished the video i knew i would find some random mf using this em… (ytc_Ugwd6GJ1r…)
- People should do strikes, widespread strikes that effect every industry so that … (ytc_UgzQrlROE…)
- As an IT professional in the medical field. Medical personnel are idiots. Withou… (ytc_UgwpyF6OB…)
Comment
1. Modern AI is not "modelled on the brain". This is poor scientific communication. Neural networks in computing are, to put it most charitably, an over-simplified model of a neurone.
2. LLMs are models of language. They are not intelligent, nor are they models of intelligence. The rest is an illusion on the user's part.
3. Software engineers, what we call programmers, don't just program. They design whole system architectures and have an awareness of their purpose and application, something that LLMs don't do well on a large scale. It's like saying that all plumbers do is unblock pipes with a stick.
youtube · AI Governance · 2025-07-02T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy0-rCJww6ReWv2HsB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwQO9rYbI1oVgFLvzR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzsbR3_MddZBekK-el4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzZka5wvXUnzowIVcl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy21kjbRwxwF9dNO7t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyZurJG6TfZFKvp9TF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxdI_EWUvQY3BBMQPh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxAj49gvSjF7fOTt_p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzAEUHwKn0YEiCpd7F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzN64UVM1LZVa88zG94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}
]
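The "look up by comment ID" operation above can be sketched as a small parser over the raw LLM response: the model returns a JSON array of coded records, which can be indexed by `id` for direct lookup. This is a minimal sketch, not the tool's actual implementation; `index_by_id` is a hypothetical helper, and `RAW_RESPONSE` is an excerpt of the response shown above.

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of coded comments.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugy0-rCJww6ReWv2HsB4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzN64UVM1LZVa88zG94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM coding response and map comment ID -> coded record."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(RAW_RESPONSE)
record = coded["ytc_UgzN64UVM1LZVa88zG94AaABAg"]
print(record["policy"])   # industry_self
print(record["emotion"])  # mixed
```

With the records keyed by ID, the coding-result panel for any comment (like the table above) is a single dictionary lookup rather than a scan of the array.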