Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or open one of the random samples below.

Random samples
- @benoitchouinard1056 second loud incorrect buzzer but not as loud since you're e… (ytr_UgyrrY5B1…)
- I definitely see this being a good reason for tech skepticism, but this isn't th… (rdc_oi2vmt3)
- Once they have taught the AI how to think, they are not deleting the data. Machi… (ytc_Ugyx-P_NG…)
- Ai has already been chosing what facts to put out there, and therefore is taking… (ytc_UgwuZncms…)
- Woman: Aye robot follow tyrese to the oak apartments and if he with that hoe aga… (ytc_UgxoHtJlC…)
- "Generative artificial intelligence uses massive amounts of energy for computati… (ytc_UgwzFTOw7…)
- I always said i hope to be dead the day humanity succeeds to combine robotics, A… (ytc_UgxvBLCJc…)
- Ai can't create new concepts. It's trained on existing data or algorithms. The d… (ytc_Ugw7YABpw…)
Comment
What stood out to me here isn’t the idea of “future AI risk” — it’s how quietly authority is already shifting today.
In many organizations, the issue isn’t runaway super-intelligence. It’s the gradual normalization of AI outputs becoming the default starting point for decisions, reviews, and approvals. Once speed and polish become the primary signals of quality, human judgment moves downstream — or disappears entirely.
This is a pattern I’ve been documenting across real workplaces where AI doesn’t need to be autonomous to change who is actually deciding. It only needs to be convenient, consistent, and time-saving.
The real risk is passive adoption without clear responsibility.
If no one can explain why an AI-assisted decision made sense then authority has already drifted, even if no policy was ever changed.
Unexamined delegation is the problem.
Source: youtube · Posted: 2026-01-28T22:3… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
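Each coding result follows the same fixed schema. As a minimal sketch, here is that record expressed as a Python dataclass; the field names come from the table above and the example values from the raw response below, while the types and the `CodingResult` name itself are assumptions for illustration:

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One coded comment; fields mirror the Coding Result table (names assumed)."""

    id: str                      # comment ID, e.g. "ytc_..." for YouTube comments
    responsibility: str          # e.g. "developer", "company", "distributed", "ai_itself", "none"
    reasoning: str               # e.g. "deontological", "consequentialist", "virtue", "mixed"
    policy: str                  # e.g. "regulate", "liability", "none", "unclear"
    emotion: str                 # e.g. "fear", "outrage", "approval", "indifference", "mixed"
    coded_at: str | None = None  # ISO 8601 timestamp added when the comment was coded
```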
Raw LLM Response
[{"id":"ytc_UgwJjVPPxRKLk_EhFuV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_UgyoQNZaYdLgmJSUNyN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},{"id":"ytc_Ugw2PlaLm0IcC3ThBNN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgzLR4Kb7lW-vqQ4VFB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},{"id":"ytc_UgyU7KI1Pz1XtRuQfB94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_Ugyoj4NRJNDUbL0GlPV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgxnT51F-tccieTZSrJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgxAyewDdmXOKWAb8CZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgwR2AH_xdmySHHX_nl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},{"id":"ytc_UgzaUD67bAVjkmcXGz14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]