Raw LLM Responses
Inspect the exact model output for any coded comment; responses are looked up by comment ID.
Random samples:
- ">Seems to be a game changer w Russia on the ropes? it'd be a game changer li…" (rdc_oi4cgv6)
- "This AI bubble will burst soon and it’s sad we depend on AI this much…" (ytc_Ugw0lV7_c…)
- "I'm writing a novel about AI+robotics now - when they attain sentience and self-…" (ytr_UgwMvQp3z…)
- "100% agree. How many times does AI have try to refactor your entire codebase to …" (ytc_UgxtPraVG…)
- "My daughter went to a school like this and it was perfect for her. She loves pus…" (ytc_UgyCdzUe6…)
- "yeah, turnitin’s stricter now. i use Winston AI to help humanize my text and cat…" (ytc_Ugz606sMz…)
- "This stupid ai cat you see every to tell you to like and subscribe: PlASe LIke A…" (ytc_Ugy7WXHTV…)
- "Edward Snowden tried to warn everyone of this. When the NSA was tracking everyon…" (rdc_fejumop)
Comment
His apocalyptic view, while extreme, ironically validates my suspicion about the industry: the 'AI safety' narrative has become the new marketing hook to reel in investments, replacing AGI as the ultimate clipbait.
This cycle perpetuates the Superintelligence race the expert himself denounces. Thus, the ecosystem continues to glorify the programmer investor and centralize value, while artists and creatives are still forced to learn models that are fundamentally planning our obsolescence with the sole aim of maximizing profits.
The real conversation isn't about the existential risk; it's about the systemic disloyalty of those who sell us the tools while actively working towards our replacement.
youtube · AI Governance · 2025-09-30T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
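The four coded dimensions above can be checked mechanically before being stored. A minimal validator sketch, assuming the value vocabularies are exactly those observed in the raw responses on this page (the `ALLOWED` sets and the `invalid_fields` helper are illustrative, not part of the actual coding pipeline):

```python
# Value sets as observed in this page's raw LLM responses;
# an assumed, not exhaustive, codebook.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "distributed", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed", "resignation"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above.
coded = {"responsibility": "company", "reasoning": "deontological",
         "policy": "regulate", "emotion": "outrage"}
print(invalid_fields(coded))  # -> []
```

A record that passes returns an empty list; any misspelled or out-of-vocabulary value (or a missing dimension) is reported by name.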
Raw LLM Response
```json
[
{"id":"ytc_Ugx6k78L2xakgXCXXcJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwtSk-qNSww4QXYwxd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxUCd9PIx7P1GXy-v54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx8WtVEsXBHKNC-_aZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgycjWTfNt0wWTGgBDl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxy9YbivjscvEe6vfV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyQ0qlMh3V38cwFhRd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxLnHrQ8Gk_hCUbfCl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw27x6MHmtVRqp0zoJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyV21cfOdj4OKbjqSB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
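Since the raw response is a JSON array of per-comment codes, retrieving a single comment's coding by ID is a small parsing step. A sketch under that assumption (the two inlined records are copied from the response above; `codes_by_id` is an illustrative name, not pipeline code):

```python
import json

# The model's raw response is a JSON array of per-comment codes.
# Two records from the response above, inlined for illustration.
raw_response = '''
[
  {"id": "ytc_UgwtSk-qNSww4QXYwxd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx8WtVEsXBHKNC-_aZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
'''

# Index the codes by comment ID for constant-time lookup.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codes_by_id["ytc_UgwtSk-qNSww4QXYwxd4AaABAg"]
print(record["policy"])   # -> regulate
print(record["emotion"])  # -> outrage
```

The same indexing step generalizes to the full array, which is how a "look up by comment ID" view can map any ID back to its coded dimensions.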