Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI is going to wind up making any coursework utterly pointless. College is just …" (ytc_UgyhaDO2E…)
- "No it will not, thats a paradox, of no one is working no one can bye the stuff a…" (ytc_UgwILdGIs…)
- "Funny all those musicians saying ill never fear ai competing in the professional…" (ytc_UgxTJa3u8…)
- "Like humans are most likely a creation of another intelligence…. AI will take …" (ytc_Ugxs0lJKd…)
- "It's due to the higher levels of CO2. Weird, I'm told CO2 is a pollutant though.…" (rdc_e43d7hr)
- "Now you can go get a real job instead of being a robot for 19 dollars an hr at a…" (ytc_Ugw6uy8-D…)
- "So who’s responsible when AI breaks the law? If AI is directly knowingly respons…" (ytc_UgwEREuk7…)
- "How can you say something is “full of emotion” when you spent no time creating i…" (ytc_Ugy0lLG6U…)
Comment
Howard Marks has always struck me as one of the most sensible voices in investing. His clarity about markets, risk, and prudent decision‑making is unmatched. In the final two minutes of this video, he makes a point that resonated deeply with me: the idea that AI replacing human jobs isn’t just an economic issue—it’s a human one.
I share his view that work gives people purpose. Even if someone has enough passive income to live comfortably, a job provides structure, meaning, and a sense of contribution. When people are engaged in meaningful work, they’re more constructive members of society; when they aren’t, the absence of purpose can become harmful.
That’s why the rapid acceleration of AI is both exciting and worrisome. There’s no doubt that AI will bring extraordinary breakthroughs in science, medicine, and countless other fields. But the potential displacement of human jobs poses a real challenge. The benefits are enormous, but so are the societal risks if we don’t handle this transition thoughtfully.
youtube · AI Jobs · 2025-12-12T16:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwhqw7bWovxuXnleWt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwy_MY3ljp4J2yIKx54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzU5Mwf9x7DBgZufSt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy__tvEO1Ho4hQ5BjR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz2hngH6RF8iJNAf5R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyRcHu_8kdXyBwECBp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwicL3mtKv0xATBO4R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxPrUabNA05O6yPO0V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwTmFbqjYUnWiSkLkd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxQ2GEmwIAKxc2_rc14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"}
]
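The lookup-by-comment-ID step described above can be sketched in Python: parse the model's JSON array, index the codings by `id`, and drop any record whose value falls outside the codebook. The `ALLOWED` sets below are inferred only from the labels visible in this export (the real codebook may define more categories), and `raw_response` is a two-record excerpt of the batch above, not the full output.

```python
import json

# Excerpt of the model's raw JSON response (two records copied from the batch above).
raw_response = """
[
  {"id":"ytc_UgwTmFbqjYUnWiSkLkd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxQ2GEmwIAKxc2_rc14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"}
]
"""

# Allowed values per dimension, inferred from labels seen in this export;
# the actual coding scheme may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse the JSON array and index codings by comment ID,
    skipping any record with an out-of-codebook value."""
    by_id = {}
    for rec in json.loads(raw):
        dims = {k: v for k, v in rec.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in dims.items()):
            by_id[rec["id"]] = dims
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgxQ2GEmwIAKxc2_rc14AaABAg"]["emotion"])  # resignation
```

Indexing by ID makes the inspector's lookup an O(1) dictionary access, and the validation pass surfaces any label the model invented outside the codebook before it reaches analysis.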