Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytr_UgwuF7Aac…: "@jeanniep1003 agreed. But what they don’t seem to understand is that ai billiona…"
- ytc_Ugy7gOFDb…: "Current AI uses the collective knowledge by humans all these centuries to provid…"
- ytc_UgyU2hXdM…: "And what do American human taxi drivers think about self driving cars ? Here, in…"
- ytc_UgytlVkXF…: "How long until someone creates a fake GPT that just pipes inputs to Chat GPT (or…"
- ytc_UgxUYH6YB…: "Some kind of weird AI-Robot kid in the future will love the Human Museum the way…"
- ytc_Ugw02ETPf…: "Ai can’t become conscious in a human sense EVER, it can however form bias from b…"
- rdc_oa5ctvv: "Sue the city, county, state, and the AI company. Sue the mayor and police chief …"
- ytc_UgyAsMSds…: "AI should NOT be encouraging anyone to commit suicide! It should be set up to ke…"
Comment
Not to hate Max on these, but honestly it's tiring to always see contents putting AI in a bad light. Associating it with fear and job insecurity. While these can be a worst case scenario in a dystopian era, it seems we fail to see that any form of work would still in essence need humanity. No matter how calculated and data fed AI is, it will never be 100% humanize. It will never undergo life and gain experience from life like humans. At most AI when refined will only be a helpful tool for us humans to do their jobs.
Like can you imagine trusting your health to an AI alone instead of a human doctor? Why is it that customer chat support would still have an option to talk with a "live agent?"
Humans would still be needed to facilitate the use of these AI inventions.
The question should not be if AI will replace us, but rather how AI will reshape and refine our jobs.
youtube · AI Jobs · 2025-09-09T01:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
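One way to sanity-check a coding result like the table above is to validate each dimension against its allowed labels. A minimal sketch, with the caveat that the `ALLOWED` sets below are inferred only from the values visible in this section's raw responses, not from the full codebook, and the `validate` helper is hypothetical:

```python
# Allowed labels per dimension, inferred from the sample output in this
# section (an assumption, not the tool's actual codebook).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "approval", "outrage", "fear", "mixed"},
}

def validate(coding: dict) -> list[str]:
    """Return the names of dimensions whose value is not an allowed label."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding from the table above, as a dict.
row = {"responsibility": "none", "reasoning": "virtue",
       "policy": "none", "emotion": "approval"}
print(validate(row))  # [] -> every dimension carries a known label
```

An empty list means the coding is well-formed under the inferred label sets; any misspelled or novel label from the model shows up by dimension name.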
Raw LLM Response
```json
[
{"id":"ytc_UgySSJNVlHJHT30My8J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxTIekQLv0Hy2wFzjd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2DyKGcfxeSrnGeS54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwFdjjTas6FvulgmV14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw3nCOrvgPPOGaj0vJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWBszzc6apaVYcCa54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyXvsxpcEw7p1TGKZZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz2AXzQTwiqTDVn_u94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxaSUulL-3XErjeIRh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzKAe5lvbu0bmJxK114AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
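The "look up by comment ID" step above can be sketched offline: parse the raw model response and index each coding by its `id`. A minimal sketch, assuming the response is valid JSON as shown (the two-row excerpt and the `index_by_id` helper are illustrative, not part of the tool):

```python
import json

# Excerpt of a raw LLM response, as shown above: a JSON array of coded
# comments, one object per comment ID.
raw_response = """
[
 {"id":"ytc_UgxWBszzc6apaVYcCa54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
 {"id":"ytc_UgyXvsxpcEw7p1TGKZZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response and index each coding by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_id(raw_response)
coding = codes["ytc_UgyXvsxpcEw7p1TGKZZ4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # virtue approval
```

Because IDs are unique per comment, the resulting dict gives O(1) lookup from any comment ID straight to its coded dimensions.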