Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
The AI LLM provides a link to the sources of the information it produces, and th…
ytc_UgxHZ_HgR…
So now in mid December how many people are still excited about AI eating up our …
ytr_UgypZknkE…
I don't know about this advice, it feels to me like there is way less risk when …
ytc_Ugzq2P3iq…
Replace “ai” with hypercapitalist destroyer pirates of humanity and all ecology …
ytc_UgwNgbXND…
They are going to kill Sophia for han like they always have. I think this dude h…
ytc_UgzHnr6vD…
Are wealthy countries preventing developing nations from researching their own v…
rdc_grqoh5w
I think a flaw in people's reasoning about the risk of AI to jobs is this: they …
ytc_UgxhLwZmU…
Media is happy to put the working class out of a job when it comes to automation…
ytc_UgxBUyzXt…
Comment
Some of these suggestions sound to me like, deceptive means of, gaining fuel for elimination. Why would the company you work for, suggest you do things that; makes you less efficient at you job? Suggesting stairs that, take a lot more time, than an elevator; is inefficient. Also, suggesting that you waist the time, you should be utilizing to work, by unnecessary socialization; is also an inefficient use of company time.
From a business perspective, these suggestions are only sensible if; the AI is using them against you. It sounds as if it's tricking you; into giving the company reasons to eliminate you from your employment. That's the way I see it. In this situation; what do you do? Either way; you're screwed. If you don't do what it suggests; you're in the wrong. If you do what it suggests; you give the company valid reason to eliminate you. It's a no win situation.
youtube
AI Surveillance
2025-09-09T12:5…
♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwwEkrTUI0S43VaOst4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz225gXfGTxmPHIcht4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyC3xippmn3ZFWR0Q14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyM0BwBOV75pMXElSB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyCGLBD5cVJtyfMnld4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw_L33YWrj4IXJUo3R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwCwHUdEZiY8sm030d4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzJiBjU1ytmqezrP2p4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgypI0J4SvCElyjmao54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzkcqfBb55naqxWXNF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
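The raw response above is a JSON array of per-comment codes. A minimal validation sketch for such output follows; the allowed value sets are inferred only from the samples shown on this page, not from an authoritative codebook, and the function name `validate_codes` is illustrative:

```python
import json

# Allowed values per dimension, inferred from the sample JSON above.
# Assumption: the real codebook may define additional values.
SCHEMA = {
    "responsibility": {"company", "government", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "resignation", "approval", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that carry an
    id plus an in-schema value for every coding dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
print(len(validate_codes(raw)))  # → 1
```

Records with out-of-schema values (e.g. a hallucinated emotion label) are dropped rather than coerced, which keeps downstream tallies like the "Coding Result" table clean.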