Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "DID HE REALLY THINK HE WAS GONNA KNOCK OUT A ROBOT OR EVEN MAKE THE ROBOT FEEL P…" (ytc_Ugy7-fpPD…)
- "I have a very unique style that's taken me literal years to make. I have so few …" (ytc_UgzkENFue…)
- "For those people who dont know sora AI is a program where you can make video all…" (ytc_Ugz3GMuSe…)
- "This feels so wholesome Ive grown up watching ghibli and its hart breaking to se…" (ytc_UgzLuAW3V…)
- "This is the saddest vid I have watched in a long time. The obvious difference i…" (ytc_Ugwwb7GYW…)
- "No, and it never will. There is no consciousness—only dead code written by other…" (ytc_UgxNuQ4sN…)
- "If AI will be better at things than humans, such as chess and analyzing trends a…" (ytc_UgwaMsffW…)
- "Chatgpt was asked about the oldest universities in the world. The answer exclude…" (ytc_Ugyd25JwV…)
Comment
The problem I think is that ai is currently being designed to replace human work not to supplement it. Specialty ai systems for detecting cancer will not replace your doctors but generalized ai that targets white collar work will almost certainly be used to shrink staff. The evidence of this I think is obvious. The most expensive part of an organization in the US has been labor. That is why labor was sent off shore. generalized Ai is the exact same thing it will be used to lower the costs a business has. As for allowing people to work less and enjoy more benefits . . . there is absolute no chance that happens. It has been nearly 50 years since computers revolutionized modern jobs and the hours people work has not come down.
youtube · AI Jobs · 2025-10-07T16:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxrF2BfL85SR17vg7x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxdtPYsWJruyKcUgtd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxAUvYHCAv4WFHw-RB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzsfJ-Zoyqw4cnFSZF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzr1n9epVxKPm2o6IV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwD6QHjJJ0NXZI-LEd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxDCiaDUp3LDg_aH6d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyPJy-18x0aDs3s-5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyDxDhR2weSPFIsgst4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwsJVaGl4kVwSW-8G14AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"approval"}
]
```
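The raw model output is a JSON array with one object per coded comment, and the per-comment table above is a rendering of one such object. A minimal sketch of how that mapping can work (the two records here are copied from the response above; the field names are exactly those in the JSON):

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per
# comment ID. Two records copied from the batch shown above.
raw_response = """[
{"id":"ytc_UgxrF2BfL85SR17vg7x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwD6QHjJJ0NXZI-LEd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]"""

# Index the batch by comment ID so a single coding can be looked up,
# as the "Look up by comment ID" view does.
coded = {row["id"]: row for row in json.loads(raw_response)}

# Render the four coding dimensions for one comment, mirroring the
# "Coding Result" table.
record = coded["ytc_UgxrF2BfL85SR17vg7x4AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {record[dim]}")
```

Keying the batch by `id` is what lets a single LLM call code many comments at once while each comment's row in the database still gets its own dimensions.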