Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "@AscendantStoic 35people disagree, and based on your other comments in this vid…" (ytr_Ugw2LWncP…)
- "I tell you if (cis) men had periods, then unlimited paid sick days would be the …" (rdc_gtor2im)
- "I was asking CoPilot about the positive benefits of President Biden's economy an…" (ytc_UgwIFZ8P9…)
- "I was never worried about this kind of stuff, until AI came along. Before, there…" (ytc_Ugw5gONoc…)
- "Have yall seen what a disaster waymo has been in big cities? Have yall seen the …" (ytc_UgzeZTWt3…)
- "As a person who tried drawing sprites for their game and regretted it, I still w…" (ytc_UgwO7K1Hb…)
- "I say we recall a whole bunch of these bad ai systems and redo the code and wher…" (ytc_Ugwd2tN5F…)
- "I was wondering how well this would work for such a large piece of work. I pers…" (rdc_jdiu6f6)
Comment
Grammar-checked (verbatim comment, lightly corrected):
I have no other beliefs. RPA, or hyperautomation, is a dictator. Nothing can be done.
Explanation:
This statement frames RPA/hyperautomation as an unavoidable, top-down force that dictates how work is executed. The “dictator” metaphor emphasizes inevitability and loss of human discretion: once automation is embedded into processes, organizations must comply with its logic, pace, and constraints. “Nothing can be done” reflects the perception that opting out is impractical because efficiency, cost, and scale pressures make automation compulsory rather than a choice.
youtube
AI Governance
2025-12-30T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxIlxygrDM76UegcIx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzFZ9HscMbQ2XdC7rV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzIQ_ab-DLJmZgCeCZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz7KJpH_SA-E_UEM8p4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxzzr-m56Jqh_BtF8p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyY9KTe-reY2aVmV7h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz4J420mPki2zXt49F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyQuwnKlHQVpIq418x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZiq3yQnrwq4juIH94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyKgHHZCM7igc_MY0N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
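A raw batch response like the one above has to be parsed and validated before its codes can be attached to comments. The sketch below shows one minimal way to do that; the `SCHEMA` value sets are inferred from the sample values shown in this section, not a definitive codebook, and `parse_codes` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per coding dimension. These sets are ASSUMED from the
# examples above; the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "distributed", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"fear", "resignation", "outrage", "mixed", "indifference", "approval"},
}

def parse_codes(raw_response: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any value outside the expected category sets."""
    coded = {}
    for record in json.loads(raw_response):
        cid = record.pop("id")
        for dim, value in record.items():
            if value not in SCHEMA.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = record
    return coded

# One record from the response above, used as a smoke test.
raw = '''[
  {"id": "ytc_UgzFZ9HscMbQ2XdC7rV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]'''

codes = parse_codes(raw)
print(codes["ytc_UgzFZ9HscMbQ2XdC7rV4AaABAg"]["emotion"])  # resignation
```

Validating against an explicit category set at parse time catches the most common batch-coding failure (the model inventing an off-schema label) before a bad value reaches the coded dataset.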