Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I think "hallucination" is a hugely unhelpful term because it implies that somet…" (ytc_UgyE1VLNZ…)
- "AI learns at a rate far faster than humans. It doesn’t have to rest. And if you …" (ytc_UgyOKDhUE…)
- "saying this is a disservice to the many artists with tic disorders that manage t…" (ytr_UgyCLEpHD…)
- "I see a lot of CLEVER people out there talking about the future of AI, about the…" (ytc_UgwAnALOn…)
- "The research is flawed! If you’re writing an essay, you must write it yourself. …" (ytc_UgzWrJcg4…)
- "HELP I JUST BROKE RHE FILTER HAHANSHUQNU9HXN “so you wanna get smashed” IS WHAT …" (ytc_Ugz9ISsxq…)
- "Seeing Loab at the end excited me. If you guys don’t know I recommend checking o…" (ytc_UgxI5A5gO…)
- "tbf this is one of the lost battles that i see in AI war. it was 4 or 3 years ag…" (ytc_UgzteZyoN…)
Comment
In a perfect world, AI wouldn’t reduce the number of jobs as much as it would just allow us to work less hours so we could all get back to enjoying life, being with our families and communities, and generally working on making our communities a better place. Last stage capitalism will never let this happen. We’re doomed as a species if we can’t get around squeezing every penny out of labor.
Source: youtube · Topic: AI Governance · Posted: 2025-06-16T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyhPETlAUy35Alrn2J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw1nEUgXfIt7LLuejF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxgPW6rCy7paYRJuz94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyhKGopKNaLRyK29UJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx-GdmuiRRHN2vccCd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzQxpA_JXRUjwkjySl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw0p18807wntT9j7314AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyinjspiVuTNhnzu7p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyeXPx7zO5ARJ4QPrl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwFed3csJsg1KzBfGJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
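The lookup-by-ID workflow described above can be sketched as a small parser over the raw LLM response: load the JSON array, validate each record against the coding dimensions, and index by comment ID. This is a minimal illustration, not the tool's actual implementation; the allowed value sets below are inferred only from the values visible in this page and the real codebook may define more.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
RAW = """[
 {"id":"ytc_UgyhPETlAUy35Alrn2J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugw1nEUgXfIt7LLuejF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

# Allowed values inferred from the sample output; an assumption, not the
# authoritative codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and index coded records by comment ID,
    keeping only records whose values fall inside the allowed sets."""
    records = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            records[rec["id"]] = rec
    return records

coded = index_by_id(RAW)
print(coded["ytc_UgyhPETlAUy35Alrn2J4AaABAg"]["policy"])  # regulate
```

Validating before indexing matters here because LLM output is not guaranteed to stay inside the schema; a record with an out-of-vocabulary value is silently dropped rather than surfaced in the lookup.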