Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.

- "But We could replace CEO with AI, Who will buy all the products, if the people d…" (ytc_UgyUh4_D_…)
- "Talk with Google Gemini to get a taste of how helpful ai can be for your creativ…" (ytc_Ugz_szQzQ…)
- "I was very depressed and talked to chatgpt and they were going to call the polic…" (ytc_UgxZu2NWO…)
- "If the car is designed to be self-driving entirely. There is no need to have a d…" (ytc_Ugxhzhse5…)
- "I always check my writing with ai detectors even though I don’t use ai to write …" (ytr_UgxwwXcZc…)
- "Where many may see danger I see hope, I have pretty much given up on politcal le…" (ytc_Ugwo3y6KX…)
- "I'm tired of people treating ai like the devil. I suck at drawing. Let me have f…" (ytc_UgxUAWCFB…)
- "Because its too perfect then it is a robot! Also it cannot swim hahah bcz it wil…" (ytc_UgzFBj23S…)
Comment
Middle management is actually probably a better use for AI than gruntwork. It can produce optimal schedules and coordinate various departments better than a human can. It can take employee feedback into its decision making, too. And it can analyze market trends better than humans.
So you would just need one human to handle dozens of AI managers to make sure employees aren't exploiting them.
Source: youtube · Posted: 2024-12-11T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy3HLN9_0OvpGLJ6lF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgynFb8ng71gY6bLaRd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwxQv0LYyxzaj5jfWZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyzdAae5fVMJR3h1z14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwjoPILWw7NbPHHRBB4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwN-T5k5_rg8g5bZTV4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0oM3E61stcBnEwtp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwdU5f8BtFQ3eUgccR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTpv6NUfX8BIspI_l4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzjprOnAsQOH5V7voN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
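The raw response is a JSON array of per-comment records, each keyed by `id`. A minimal sketch of how such a batch could be parsed and looked up by comment ID (variable names and the embedded sample are illustrative, not the project's actual code):

```python
import json

# Illustrative raw LLM response: a JSON array of coded records,
# trimmed here to a single entry taken from the batch above.
raw_response = """
[
  {"id": "ytc_UgyzdAae5fVMJR3h1z14AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "approval"}
]
"""

# Parse the array and index the records by comment ID.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

# Look up one coded comment by its ID.
rec = by_id["ytc_UgyzdAae5fVMJR3h1z14AaABAg"]
print(rec["responsibility"], rec["emotion"])  # user approval
```

Building the `by_id` dictionary once makes each subsequent ID lookup O(1), which matters when cross-referencing thousands of coded comments.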