Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @jimbrown2350 how do you know AI has had access to the internet for twenty years… (ytr_UgzFIbXaX…)
- You just don’t hop n the ring with a robot especially without head gear boi went… (ytc_Ugwiz8fBP…)
- @averycuulduckdj so basically the androids become sentient and revolt against H… (ytr_UgxA9lFUU…)
- What about if the AI computer’s start writing the robots code for them instead o… (ytc_UgxDmfKhV…)
- Wouldn't it be appropriate to cover Sophia's head with a wig, and cloth the bod… (ytc_UgyQ3s3mv…)
- AI is here to stay, and expand. Like most things there is good and bad to it. Ca… (ytc_UgyOMPW3e…)
- I think we're past the point of categorizing AI as a mere tool if we resort to a… (ytc_UgyLV0pQB…)
- As Anton talks about the future of work, I'm just here appreciating how Pneumati… (ytc_UgyHJXCwx…)
Comment
All that is really missing is the middleware that connects jobs to AI: for instance, in IT, handling administration tasks such as setting up new users, help desk support, VM deployment, and patch management. The same goes for other departments: sales, marketing, HR, etc.
The biggest risk is prompt injection (AI injection), which enables a hacker to trick an AI system into performing nefarious tasks: wiring $5M to an offshore bank account, running ransomware on all of the company's servers, or leaking confidential information. This is really the only factor that will prevent or delay widespread AI deployment. Systems will need to be configured to limit the AI's capabilities in a way that prevents it from circumventing security features.
Source: youtube · Video: AI Jobs · Posted: 2026-02-24T17:3… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgySkEnSxUA4hLz41hF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzCf6lulGBXfBgdAyZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyhoeGG9FWyxdDa1Td4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzuMLSL0R-3iPTwlZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzKCFukqDsEIhjDOMt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxQrw-eGKZbVnLmwpJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwFR3AVNls0KTgOtKl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwfZ4vOIMxG9Yp2HUt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyu-0yUAqCP6H3Dwnd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz3oiciQzRwGIceFmV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
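The lookup-by-ID workflow this page supports can be sketched in a few lines: parse the raw LLM response as a JSON array and index it by comment ID. This is a minimal sketch, not the tool's actual implementation; the two entries below are taken verbatim from the raw response shown above.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (two entries copied from the output above for illustration).
raw_response = """[
  {"id": "ytc_UgySkEnSxUA4hLz41hF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwfZ4vOIMxG9Yp2HUt4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coded dimensions for one comment.
coded = codings["ytc_UgwfZ4vOIMxG9Yp2HUt4AaABAg"]
print(coded["policy"], coded["emotion"])  # regulate fear
```

The same dictionary can back the coding-result table: each key of a coding row (responsibility, reasoning, policy, emotion) is one dimension row in the table above.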