Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below to inspect.
i make albums, write books, paint, ect. I am tired of the bias both for, or agai…
ytc_UgzVBJfY6…
Why do we have an overliance on ai and let make predictions on who will be a cri…
ytc_Ugx47dwVy…
It isn't a real video
it's ai. And why is there subway surfers?
P.S. have a di…
ytc_UgyuhYQhb…
humans also train on images without asking the artist but when a ai does ot its …
ytc_UgwpABGQ6…
If im honest all i see in Ai with jobs is poverty, unemployment, lack of income
…
ytc_UgxwrKzhQ…
Using AI to source cases to cite seems like a perfectly reasonable idea. Not act…
ytc_Ugw2zBxr0…
If we ever get to the utopia then no one will be able to imagine how it could ha…
rdc_d3xaxb3
These types of AI automation are also super unreliable. It's only a matter of ti…
rdc_my1acsu
Comment
You are tremendously overestimating capabilities of current AI, especially its reliability. Its widely known fact, AI is not capable of any fact checking, does not complete jobs till the end because with current architecture of LLMs it is simply not possible to reliably guarantee anything. AI at this point does not replace even junior software engineer. But still it is an usefull tool to enhance certain workflows.
youtube
AI Governance
2025-09-03T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw_6vorjHdciMvuOo94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyoag5S0730trMSBtt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxX5PHtA-RjjQuz1VV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxVQwE1AlbKoXgCQPp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwX8HpldYAUyBheF2x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwL0iro5SIrrDtYdep4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgzG0wRV5aHwd6QL4hV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwZ2Y0dFRixIv_1z1J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugyi3q9ocNY_xJj95Oh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgydoyW9cc4xzUFJfxN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
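The raw response is a JSON array in which each entry carries a comment `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). The "look up by comment ID" step can be sketched as parsing that array and indexing it by ID. This is a minimal illustration, not the tool's actual implementation; the function name `index_codes` and the two sample entries (taken verbatim from the response above) are chosen for the example.

```python
import json

# Two entries copied from the raw LLM response above; in practice this
# string would be the full model output for a batch of comments.
raw_response = """
[
  {"id": "ytc_UgwZ2Y0dFRixIv_1z1J4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"},
  {"id": "ytc_UgxX5PHtA-RjjQuz1VV4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_codes(response_text: str) -> dict[str, dict]:
    """Parse a raw coding response and map each comment ID to its coded dimensions."""
    codes = json.loads(response_text)
    return {entry["id"]: {k: v for k, v in entry.items() if k != "id"}
            for entry in codes}

by_id = index_codes(raw_response)
print(by_id["ytc_UgwZ2Y0dFRixIv_1z1J4AaABAg"]["policy"])  # industry_self
```

With the codes indexed this way, rendering a "Coding Result" card for any sampled comment is a single dictionary lookup on its ID.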