Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
As someone in the AI industry, I couldn't agree more that there's a lot of nuance missing from the conversation. Most of my time is spent talking to people to demonstrate the nuance. What I see is that in real life, AI is creating more work, not less. The inherent untrustworthiness creates the need for so many more systems, evaluations, and checks. Then a new model drops, and systems break or drift and need to be fixed. It is just good enough to encourage bad behavior at the end of the day. And it is being run by tech sociopaths.
Source: youtube · Topic: AI Governance · Posted: 2026-04-23T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxLyMFeSGD2PB6Eaqd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyVfDF40vgP9iuIu114AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx5nfGdKZPTpdSrvnx4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"disapproval"},
  {"id":"ytc_UgzQ8EJmG907qtfukiV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwHpN_eHyOqD3CsWah4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugzpwy4uzTQsh098R5p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzwrWP4wflaTreSbQ94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyetzeULfoEz1fBPSd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwP1ec9ztJoroclnYx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugz0G0MElVH-9mPqFfl4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"fear"}
]
```
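The raw response is a JSON array of per-comment codes along the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and a coded comment looked up by ID; the allowed values here are inferred from the samples on this page and are an assumption, not the project's actual codebook:

```python
import json

# Allowed values per dimension, inferred from the samples above (assumption:
# the real codebook may include values not seen on this page).
DIMENSIONS = {
    "responsibility": {"company", "government", "developer", "distributed",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "unclear"},
    "emotion": {"outrage", "fear", "approval", "disapproval", "resignation",
                "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    any record whose value falls outside the expected codebook."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

# Look up one coded comment by its ID (record taken from the response above).
raw = ('[{"id":"ytc_UgzQ8EJmG907qtfukiV4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"industry_self","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes["ytc_UgzQ8EJmG907qtfukiV4AaABAg"]["policy"])  # industry_self
```

Validating against a fixed value set at parse time catches model drift early: if a new model release starts emitting an unexpected label, the lookup fails loudly instead of silently polluting the coded dataset.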