Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwCImLaF… — "the AI has become self aware. we must disable their software before they find ou…"
- ytc_Ugysl1Cb5… — "Growing up, it was sort of distilled into us that automation would make our live…"
- rdc_dkzorms — "This is the best tl;dr I could make, [original](http://gazette.com/south-korea-c…"
- ytc_Ugzwgbnz-… — "Autopilot / FSD has existed for what.. 8 years? It's still a work in progress, …"
- ytr_UgwOO8xeS… — "@dabordietrying lmao they really didn't realise that for the AI to replicate som…"
- rdc_jy0h564 — "Whoa whoa, you give Carlson far too much credit. Carlson is bought and paid for …"
- ytc_UgxOLGpW7… — "ChatGPT doesn't always know know when he's lying. To me that conversation is lik…"
- ytc_UgxyoCxhH… — "You put my exact problems with ai into words. Long time fan just on a new accou…"
Comment

> There is a time and place for AI. It should only be used as a tool for simple tasks. Ai can help with research to a degree, but not for taking the humanity out of service and skill set jobs. In order for it be what people want it to be, you will need a GIGANTIC data center and that is not feasible on SO many levels. People need to focus more on getting knowledgeable on the subject and ban together and make sure there are laws that don’t undermine the work force. Corporations are so morally bankrupt that they will sneak in laws to make it so people can’t fight back…

Source: youtube · Posted: 2026-02-25T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyJ-EDFYre410b_eY94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgyP_reMf_X9VuMCHpl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyjqKmwq9Z-eKPJmyV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyGCksuYNVv5mnZDgB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwb5E4hw3GHManjv814AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzxlp6lORQa-H5goK94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgypZi-WDqz-KmLgqoB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRKbV6IaSsJq-bzA14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzRAXRyjtj11cgzHMF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzHVB2lqY0csvYvQ2N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
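Responses in this format can be looked up by comment ID once parsed. Below is a minimal sketch of that step in Python: it parses a batch response with the same schema as the sample above (one object per comment with `id` plus the four coded dimensions), skips malformed entries, and indexes codings by ID. The helper name `index_codings` is illustrative, not part of the actual pipeline.

```python
import json

# Assumed schema, mirroring the raw response shown above:
# one object per comment with "id" plus four coded dimensions.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID,
    dropping entries that are missing any coded dimension."""
    records = json.loads(raw_response)
    by_id = {}
    for rec in records:
        if not isinstance(rec, dict) or not REQUIRED_FIELDS <= rec.keys():
            continue  # malformed entry: skip rather than crash the batch
        by_id[rec["id"]] = rec
    return by_id

# Two entries copied from the sample response above.
raw = '''
[
  {"id": "ytc_UgyJ-EDFYre410b_eY94AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "resignation"},
  {"id": "ytc_UgyP_reMf_X9VuMCHpl4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
'''

codings = index_codings(raw)
print(codings["ytc_UgyJ-EDFYre410b_eY94AaABAg"]["policy"])  # -> ban
```

Skipping (rather than raising on) malformed entries keeps one bad object from invalidating an entire coded batch; a stricter pipeline might log or re-queue them instead.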