Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples:

- "Man the stigma against AI images is insane that people go out of their way to in…" (`ytc_UgyOAMRkk…`)
- "Small note before I start : I wrote all of this at 5 minutes of the video becaus…" (`ytc_Ugy-eBxbq…`)
- "I'm ngl I have not heard ai support from a single disabled artist. And yet that'…" (`ytr_UgzRx43-2…`)
- "A. I. has been around for decades and yes, creatives use it to collaborate work.…" (`ytc_Ugy7rfVHr…`)
- "And it begins....Google not only has a lot of money and irons in the fire (robot…" (`rdc_czxujl5`)
- "There's an old anime I watched that I can't remember the name of now, but I reme…" (`ytc_Ugylck8X-…`)
- "THIS IS HAPPENING TO ME TOO, my mom uses ChatGPT and refuses to acknowledge the …" (`ytr_UgzyByMbL…`)
- "I don't mean to take away from how amazing this story is. However chat-gpt isn't…" (`ytc_Ugx17YyoQ…`)
Comment
Verbatim (grammar-checked):
RPA and hyperautomation are the fatalistic and karmic result of technology. Once proven, they are a dictatorial technology and final.
Explanation:
This framing treats RPA and hyperautomation as an inevitable outcome of technological evolution: fatalistic because their arrival cannot be avoided, and karmic because they are the consequence of decades of process optimization and digitization. Calling them dictatorial suggests that once these systems are validated and deployed, they impose rigid rules on organizations and humans alike, leaving little room for discretion or reversal. Final underscores the belief that this stage represents an endpoint in operational technology, where human choice yields to automated determinism.
❤🎉
youtube · AI Governance · 2025-12-30T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugy1yNn8iPBCxhSb55B4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx6gG2VjuESeLO1h2Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzoiV6Q17CEnL9u3fh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzZmwzsw89mEWkt1fx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwsXQemX6V1kLZo96N4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzp_52F9X0m8_-_2714AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxI1bVOHtqSETdDH2B4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxk28vRk-gCzlOh9sF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzrktIkGC6lWI3-ClN4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzmMvSH8BAY9ECH5Tt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
```
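The lookup-by-ID workflow above can be sketched in Python: parse the raw JSON response, index the records by comment ID, and sanity-check each dimension against a known value set. This is a minimal illustration, not the project's actual tooling; the `ALLOWED` sets are inferred only from the values visible in this response (the full codebook may permit more), and the helper names are hypothetical.

```python
import json

# Raw LLM response, abridged to two of the records shown above.
raw_response = '''
[
  {"id": "ytc_Ugy1yNn8iPBCxhSb55B4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzZmwzsw89mEWkt1fx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
'''

# Value sets inferred from this response alone -- an assumption, not the codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed", "unclear"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "resignation"},
}

def index_by_id(records):
    """Build a comment-ID -> coding-record dict for direct lookup."""
    return {rec["id"]: rec for rec in records}

def invalid_dimensions(rec):
    """Return the dimensions whose value falls outside the known set."""
    return [dim for dim, values in ALLOWED.items() if rec.get(dim) not in values]

records = json.loads(raw_response)
coded = index_by_id(records)

rec = coded["ytc_UgzZmwzsw89mEWkt1fx4AaABAg"]
print(rec["responsibility"], rec["policy"])   # company regulate
print(invalid_dimensions(rec))                # []
```

A validation step like `invalid_dimensions` is useful here because LLM coders occasionally emit values outside the schema; flagging them lets those comments be re-coded rather than silently mis-tabulated.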