Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Google, right now, "are tesla self driving cars safer than humans" and top resul…" (ytc_UgwcfQchZ…)
- "Hi, my name is Disc, a 14 year old artist who has dreams of animating for cartoo…" (ytc_Ugypyp0cH…)
- "AI code generator will learn from 30 years of code done in the application and s…" (ytc_UgxNOMtSI…)
- "I've said this for years. Within 5-10 years deepfakes will be so common and so …" (ytc_Ugz8m_wep…)
- "Not a mathematician by any stretch of the imagination but even i can figure out …" (ytc_UgyaJ-Sco…)
- "I don't know, isn't literally EVERYTHING generated by AI "learning" from human c…" (ytc_Ugydlc8Dz…)
- "I dont think so at all, if he works with AI 2027, this isn't a "Woo profits!" ki…" (ytr_Ugy3rlvBd…)
- "how do you use ai for your art? (not tryna start a debate, just curious how you …" (ytr_Ugyrk2KdD…)
Comment (youtube · AI Governance · 2025-08-26T16:5…)

It would've been good, considering a lot of your sources are people heavily invested in AI, to talk about the financial incentives involved in fear mongering about AI's possibilities. It seems like, for a reason I can't understand, we're assuming that the people working in this field are both intelligent enough to accurately predict what AI is capable of, while also being willing to completely wipe out humanity based on short term gain. And then, on the other side of the coin, you have people who are politically invested in seeing AI regulated for any number of reasons and are therefore heavily incentivized to distort reality in order to achieve certain policy outcomes. It feels like we need a bit more than just statements from AI CEOs to determine whether or not they're actually trying to create unsupervised self-improving AI systems. I mean, I agree with the ultimate political outcome here of regulating AI companies so we're sure they aren't doing stupid stuff that will harm humanity, but it feels like we're doing a bit of propaganda here and that doesn't sit well with me...
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugwm68MALyX4azap4IN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz0HyYtSghRnpLPtRF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxKr3IZk6iHO7VUO5p4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyBeeQz0s2htc1MPTt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzb3ixO1zczy632JjJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
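The raw response is a JSON array with one object per comment, keyed by the four coding dimensions. A minimal sketch of parsing and sanity-checking such a batch follows; the allowed label sets below are only those observed in the sample response above, and the real coding scheme may permit additional values:

```python
import json

# Label sets observed in the sample batch above (assumption: the full
# scheme may include labels not seen here).
OBSERVED_LABELS = {
    "responsibility": {"ai_itself", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "regulate", "liability", "none"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject rows with unexpected labels."""
    rows = json.loads(raw)
    unexpected = []
    for row in rows:
        for dim, allowed in OBSERVED_LABELS.items():
            if row.get(dim) not in allowed:
                unexpected.append((row.get("id"), dim, row.get(dim)))
    if unexpected:
        raise ValueError(f"unexpected labels: {unexpected}")
    return rows

# Hypothetical one-row response in the same shape as the sample above.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"virtue","policy":"liability","emotion":"outrage"}]')
rows = parse_batch(raw)
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently pollute the coded dataset.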