Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugw50dZqQ…` — Social propaganda and targeted yellow journalism. Accidents are accidents, it's …
- `ytc_UgzRRT6Z3…` — I like AI, always use chatGPT for simple task everyday, but stole people art isn…
- `ytc_UgzrIACYy…` — "Taking the data from one artist's work and replicating it in a different piece"…
- `ytc_UgzuUSm0b…` — 12:27 If ChatGPT's trained by reading the internet, the only content it can "cre…
- `ytc_UgyQ9fChp…` — Irrespective of whether one believes AI should be regulated, this regulation wou…
- `ytc_Ugy6f957Q…` — Its clear that the interviewer want AI to be real. Its the cool new Shiny thing.…
- `ytc_Ugw4jA-KO…` — The logical extension of this legal action against AI copyright infringement cou…
- `ytr_UgyUFKI19…` — Some scientists believe that consciousness is a byproduct of certain kinds of in…
Comment
AI won’t directly take over most jobs. Instead, many jobs will simply stop being profitable enough for people to keep doing them. For example, in China there are still workers folding packaging by hand—even though this process was automated decades ago. Why? Because in some cases, a human worker is still cheaper than running an industrial machine.
Those who lose their jobs won’t all be “jobless”—many will shift into managing and maximizing their own assets, or they’ll become AI managers in their fields: lawyers in the legal field, doctors in the medical field, construction managers in construction, and so on. There will always be a need for a human approval mechanism.
A friend of mine works in rail maintenance. He is responsible for giving the final clearance on tracks. If he makes a mistake, lives are at stake. An AI could prepare the safety protocols for him, but in the long run, society and governments will always want a human being to hold responsibility.
Remember this: AI is a tool. It is not legally accountable and most likely never will be treated as such by society. If a hacker launched an attack using AI, we would never put the AI on trial—even if it acted autonomously in some way. The accountability would still rest with the human.
youtube · AI Governance · 2025-09-06T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwNDq_tsMSqMvBa8Xd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxZB9QShw8WCPCFQr54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzUounSPqN01Ws6ypd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyoImCtFtbFgVPTpol4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyFeimDESzg9Ui1cz54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7t5l3WFr_zidwj_F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwhdldj_W4elTlcjq94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyEISYygVHkuVPo6wB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwkWKMDqDZ4FyTuqdh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzhjyc7wV2DTl8hRkJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
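A raw batch like the one above can be checked before its codes are stored. The sketch below is a minimal validator, assuming each row must carry an `id` plus the four dimensions shown in the Coding Result table; the allowed value sets are inferred from the codes visible in this sample and the real codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from this sample batch only;
# the actual codebook may include categories not seen here.
CODEBOOK = {
    "responsibility": {"company", "developer", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problems (empty = clean)."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    for i, row in enumerate(rows):
        comment_id = row.get("id")
        if not comment_id:
            problems.append(f"row {i}: missing id")
            continue
        for dim, allowed in CODEBOOK.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append(f"{comment_id}: unexpected {dim}={value!r}")
    return problems
```

For example, a row coding `policy` as a value outside the inferred set would be reported as `unexpected policy='…'`, while a fully conforming batch returns an empty list.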