Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "One thing I know is… AI can never replace developers, it just makes us more prod…" (ytc_Ugyo5r4kH…)
- "You think any of that shit is good? And yes, actually, the writing *is* AI. If …" (ytc_UgyVt_OSH…)
- "There are so many examples of AI mimicking a watermark, which is proof that it’s…" (ytc_UgwgMloOi…)
- "Ai has no skill. You can't replace a person's lifetime and sacrifice or contribu…" (ytc_UgxFFi2gL…)
- "What a smart and well articulated woman! I understood more from her than from al…" (ytc_UgxgdEpUm…)
- "AI is part of automation, automation is not done. Best outcome would be that we …" (ytc_UgyEl2Od0…)
- "Stopping super intelligence being created is what all of the climate change acti…" (ytc_Ugyrq8Jr4…)
- "I love Yang. One thing I also find concerning is that the code AI generates beco…" (ytc_UgwqwZTjO…)
Comment

> This is incredibly eye-opening. 😳 Stuart Russell lays out the stakes so clearly—AI isn’t just about convenience or business; it could fundamentally shape the future of humanity. The “gorilla problem” and the risks of AGI really make you think about who’s really in control and what responsibility we all have. Thank you for sharing this—so important for everyone to watch and reflect on. 🙏

Platform: youtube · Topic: AI Governance · Posted: 2025-12-07T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
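
For reference, a minimal sketch of how one coding result could be represented in code. The label sets below are only the values that appear in the sample output on this page, not necessarily the project's full codebook, and the `CodingResult` dataclass is illustrative rather than the tool's actual schema.

```python
from dataclasses import dataclass

# Label values observed in the sample output on this page; the real codebook may differ.
RESPONSIBILITY = {"government", "company", "developer", "user", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "virtue"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "outrage", "resignation", "approval"}


@dataclass
class CodingResult:
    """One coded comment: a single label per dimension."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        # Check every dimension against the observed label sets.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```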
Raw LLM Response
[
{"id":"ytc_UgxRMlkPWGZmJGP-Let4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxRrW1If8xX27oRAgx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxGO4IXsZSM7ncU14Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxRt46Pmx0VD_lrllp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwQtHxKf06CvG_5N294AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy-1_DRHgpA2F-C5RN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxH2mgWIi_roUFOzht4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw1Xt9-0rHI93CwGip4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdL1inWvEHlyr3gvV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7wnUK14_gKgXp9mR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
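
Since the raw response is a JSON array keyed by comment `id`, looking up a single coded comment (as the "Look up by comment ID" control does) only requires parsing the array and indexing it by `id`. The snippet below is a minimal sketch under that assumption; `index_by_comment_id` and the embedded sample string are illustrative, not part of the project's code.

```python
import json

# Raw model output is a JSON array of coded comments; one record taken from the sample above.
raw_llm_response = """[
  {"id": "ytc_UgxRMlkPWGZmJGP-Let4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""


def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}


coded = index_by_comment_id(raw_llm_response)
record = coded["ytc_UgxRMlkPWGZmJGP-Let4AaABAg"]
print(record["policy"], record["emotion"])  # -> none outrage
```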