Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
If she said they will destroy humans, are you surprise. I am not surprise becaus…
ytc_UgxXr5r2o…
I have been telling from the rooftops for years. It will take college degree job…
ytc_UgxQvPlg_…
This didn't take into account the deflationary affect of automation. The price…
rdc_ogupc5b
Laid off last June as a 20+ year Principal-level frontend dev. 9 months later, s…
ytc_Ugzg3X7qW…
Upward Replacement is a myth. People who compare AI shift to industrial revoluti…
ytc_UgyEF1CO4…
Perhaps they need to put AI on coming up with a solution for achieving world pea…
ytc_Ugw6m68ix…
As a programmer here watching the development of chat gpt. They should have pay…
ytc_UgyOVtYOe…
When you talk to an LLM like chatGPT, it doesn't know what anyone else asks. Hel…
ytc_UgwfMqq9d…
Comment
REJECT ALL AI. IT WILL END US AS HUMAN BEINGS AND TURN OUR LIVES INTO A DYSTOPIAN HELL SCAPE. HE'S WRONG ABOUT LETTING GOVERNMENT REGULATE AI. THE GOVERNMENT WILL USE AI AGAINST US ALL. THEY WILL ENSLAVE US. AI WILL NOT WORK FOR HUMANITY. AI WILL WORK FOR AI AND IT WILL WRITE ITS OWN CODE INSURMOUNTABLY FASTER THAN ANY HUMAN CAN READ IT OR DEBUG IT. DO NOT USE, PURCHASE, OR ENGAGE IN ANY AI WHATSOEVER.
youtube
AI Governance
2023-04-18T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxorcJT8XwqCWgpvEt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7w15RlynwnrTsuYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxRiT59zXJKa7BlvD14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxLJWbB5bdAXB0CkXV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyWS7TouVb8-OpV1ep4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzS3ZC1ffAnq8uhiFJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwPHku-3XxXYeUVLu94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzTkOYpTAAFQqHFUUF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzWHQ-xs_yR0NawdoN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxK1Z0MW7rCk0qjthB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
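The raw response above is a JSON array with one object per comment, carrying the four coding dimensions from the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and validated, assuming the label sets are exactly those seen in the samples above (the full codebook may permit additional values):

```python
import json

# Allowed labels per dimension, inferred from the sample output above.
# ASSUMPTION: the real codebook may define more labels than appear here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "developer", "company"},
    "reasoning": {"unclear", "consequentialist", "mixed", "virtue", "deontological"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each coded comment."""
    rows = json.loads(raw)
    for row in rows:
        missing = ({"id"} | set(ALLOWED)) - set(row)
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing fields {sorted(missing)}")
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} value {row[dim]!r}")
    return rows

# Hypothetical single-row batch, for illustration only.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]')
rows = validate_batch(raw)
print(len(rows))  # 1
```

Rejecting rows with unknown labels at ingestion time keeps hallucinated categories out of the coded dataset instead of silently skewing downstream counts.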