Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Damn people are getting so tired of AI pictures that people are now poisoning th…
ytc_UgzAZXu25…
If you can’t parallel park, you shouldn’t have a drivers license: “…but my car h…
ytc_UgxHxuKTq…
We all did this by ourselves by starting to buy online because we wanted things …
ytc_Ugx17o3_y…
@ how is this any different than the BS kwebelkop content they literally were cr…
ytr_Ugy-D6yBX…
AI is like a child, you can train it to become what you want, if you dont, some …
ytc_UgxRhW9tB…
What’s funny is people think our government that literally can’t seem to do anyt…
ytc_Ugygs9-wI…
Going back to using our brains and critical thinking skills is better than ai, b…
ytc_UgwohcxUw…
All I heard was free car. Where's my free car? I'll tell them my deepest darkest…
ytc_Ugw7UnAIV…
Comment
From around 46 minutes in where Daniel asks about an example, I'd point to OpenClaw - less about the feature sets than about how multitudes of people (and leading AI companies) embraced wildly unsafe practices for a technology that even at its best, has no safety features other than its technical limitations. i may have missed it (or if you did I'd caveat that conversations often progress quickly), but I'd also support that by trying to help people see how AI is and will increasingly become the core operator, facilitator and potentially agent of prime responsibility for all other major risks.
There are parallels between not building super intelligence and not further polluting (or experimenting on) our atmosphere with greenhouse gasses. They're different animals but fundamentally the same reasoning goes to just not getting into those situations in the first place.
youtube
AI Governance
2026-03-11T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxPkIgsNXF30aWQnJt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyQgFB3FIL2RHGZ4jV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
{"id":"ytc_Ugycd9SlpC2nIiSmjx54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz2BsgSvA7qkOT81a54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx20P5Xzj0BvwMxSqF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyZDYgTm0gbtBFBLsl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx8NvYRWnRxewpYUPB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzgZVrn7axdpU5BL0d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwRDdsg5tnsuygonbB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyBV9QmRgTXrm2zEcl4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"mixed"}
]
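The "Look up by comment ID" view above can be sketched as a small parser over the raw response: load the JSON array and key each record by its `id`. This is a minimal sketch, not the tool's actual implementation; the function name is illustrative, and the two sample rows are copied from the response shown above.

```python
import json

# Raw model output in the shape shown above: a JSON array of per-comment
# codings with "id", "responsibility", "reasoning", "policy", "emotion".
# The two rows below are sample records from the displayed response.
raw_response = """[
  {"id":"ytc_UgyZDYgTm0gbtBFBLsl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx8NvYRWnRxewpYUPB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

def index_codings(raw: str) -> dict:
    """Parse the raw LLM response and key each coding record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
# Look up one coding by its comment ID, as the view above does.
print(codings["ytc_UgyZDYgTm0gbtBFBLsl4AaABAg"]["responsibility"])  # company
```

Keying by `id` makes the lookup O(1) per comment, which matters when a batch response covers many coded comments.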