Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "So, if you ask AI a question without establishing a role it'll basically read ou…" (ytc_UgxhInlJ8…)
- "Truck driver old timers all gone new bread are want be truck drivers now you got…" (ytc_UgyFX-L5F…)
- ""ai artist" that do not exist dude, that thing do not deserve tô be named a arti…" (ytc_UgzMRkMli…)
- "AI can actually be really cool unlock really good I bet if I showed you one of t…" (ytc_Ugx5jYHds…)
- "Only half joking here, but one important step in AI seems to have repeatedly bee…" (ytc_UgxmiDdYR…)
- "The current AI are nowhere near that dangerous or "intelligent". They can't resc…" (ytc_Ugwqzvij_…)
- "An intelligent and wise human being has high empathy, shouldn’t a super intellig…" (ytc_UgydjZ9Mg…)
- "All AI output is derivative. It is just a very cleaver indexing system. IT has n…" (ytc_Ugx_i6ZD2…)
Comment
People who doubt the potential of AI are probably over-esteeming the capabilities of humans. Scientists and philosophers both question what consciousness means, whether humans are more than biological algorithm machines, and whether free will exists. Humanity has had roughly 2 million years to iterate through natural selection; AI has had maybe 20. Where will AI be in 5 years? In another 20? Consider also the immense wealth and influence that corporations are able to accumulate simply because they live longer than humans. Will AI not be the same?
youtube
AI Jobs
2026-02-28T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzALN27_NZfaLSm-FN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyv3huT81qVQMuqHDx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwexrrUWv0Q1mEPuNF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwwR8uhbjuhNaFUlOR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxmkO011abPrhD9DeJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy6bynxjBkLkDmBvPV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgybOZ-I1mayxWzuHJB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzQLd6MNweCbM3FZVh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzKHVVvU0iBApdJRPZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5A0hrYrnvc-td47d4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"liability","emotion":"approval"}
]
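A raw response like the one above can be parsed and checked before the codes are stored. This is a minimal sketch, assuming the field names and allowed values visible in the samples here; the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension (assumed from the sample rows above;
# the actual codebook may include more categories than appear here).
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself", "distributed"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"approval", "outrage", "indifference", "fear", "mixed", "resignation"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded comment against the
    schema; raises ValueError on a malformed id or out-of-codebook value."""
    rows = json.loads(raw)
    for row in rows:
        if not row.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad comment id: {row.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: {dim}={row.get(dim)!r} not in codebook")
    return rows

raw = (
    '[{"id":"ytc_UgzALN27_NZfaLSm-FN4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]'
)
rows = validate_response(raw)
print(len(rows))  # 1
```

Validated rows can then be keyed by comment id for the lookup view shown above.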