Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by its comment ID, or pick one of the random samples below.
Random samples

- "AI art gets more hate that it deserves (mostly in the fear of job loss) but I th…" (`ytc_UgwOphxqU…`)
- "You can't make AI safe because of the people that will be using it. Like the Hac…" (`ytc_UgwdDZwCw…`)
- "Maybe it's because I studied art history, but I don't understand why is the art …" (`ytc_Ugw1WXMrt…`)
- "It is an issue because people want a 8 to 3, or 9 to 5 job in an office with hea…" (`ytc_UgwnLWkDA…`)
- "I'm gonna make an ai that takes care of me and my household while i work a 9 to …" (`ytc_UgxVZ0xx1…`)
- "Very true. I'd edit your final comment "It will be companies struggling to find…" (`rdc_nbkiub9`)
- "Tesla will NEVER "teach" all driving situations to the AI computer. Because they…" (`ytc_UgzSNAVO4…`)
- "We would be fighting each other because it’s out psychopathic sociopathic greedy…" (`ytc_UgysC4Hv4…`)
Comment
> While AI will certainly have lasting impact, from a technical point of view, it is currently un-delivering massively in key areas. There are already problems with scaling the LLM models further, data result quality and security. Also AI operation is hugely costly and mostly not profitable. Actually multiple technologies are classified as AI, not just LLMs. By 2027 the AI bubble may be bursting, but that remains to be seen. Characters like Musk and Altman are sales people with a track record of sketchy, outlandish claims. "Our product is so good it will destroy society" has a grain of truth, blown out of proportion for marketing. We should stop putting these loudmouths on a pedestal and tax them properly.
>
> I'm just afraid that if 2025 taught us anything: just because something is idiotic and doomed to fail, it does not mean that people won't attempt it.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Jobs |
| Posted | 2025-10-24T19:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
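A minimal sketch, in Python, of what each coded record holds. The dimension names come from the table above; the value sets listed are only the ones observed in the raw response below, so treat them as an assumption about the codebook rather than its definition.

```python
from dataclasses import dataclass

# Value sets observed in this sample batch only (assumed incomplete).
RESPONSIBILITY = {"none", "company", "ai_itself"}
REASONING = {"unclear", "consequentialist", "virtue", "mixed"}
POLICY = {"none", "liability", "unclear"}
EMOTION = {"indifference", "approval", "outrage", "fear", "resignation", "mixed"}

@dataclass
class CodedComment:
    id: str             # "ytc_…" for YouTube comments, "rdc_…" for Reddit
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if a dimension holds a value outside the observed sets."""
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} value: {value!r}")
```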
Raw LLM Response
```json
[
{"id":"ytc_Ugy7Qp4J0FJqhXVyALJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyAlJL59n5hLUazRuZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNoEghalDfIhr3Bhp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwoPij75HAAxEVcy1t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyPlIRiSXATdaqt6ON4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxUC_ZE_TdylD1mLrl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzBzdSTchba-kbTNYN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyInr6JDd0DekDyfcZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw2fQ6MKWdUDy4N5SR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx341QZlrELVCUgycd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
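To make the lookup concrete, here is a minimal sketch of what this page does: parse the raw batch response (a JSON array of records, one per comment) and return the record for a single comment ID. The function name `lookup_coding` and the string input are illustrative assumptions, not the tool's actual implementation.

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict:
    """Return the coding record for `comment_id` from a raw LLM batch response."""
    batch = json.loads(raw_response)  # the model emits a JSON array of records
    for record in batch:
        if record["id"] == comment_id:
            return record
    raise KeyError(f"no coding found for {comment_id}")

# Usage, against the first record of the batch shown above:
raw = """[
  {"id": "ytc_Ugy7Qp4J0FJqhXVyALJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""
print(lookup_coding(raw, "ytc_Ugy7Qp4J0FJqhXVyALJ4AaABAg"))
```

A real pipeline would also have to handle model outputs that fail to parse as JSON at all, which is presumably one reason the exact raw response is preserved for inspection here.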