Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Thanks for your comment, @karengatunstall265! AI has come a long way, so why not…
ytr_UgztTOSKK…
I have sometimes been thinking of the situations when you by misstake misspell w…
ytc_Ugy5QNFi_…
> You would have to be incredibly naive to think that every military power in…
rdc_gs5ufqy
The issue here is this isn’t a loom or atm that can only do one task. Ai will re…
ytc_UgyWQlYn6…
So open AI is following the same path as "novel coronaviruses & vaccines", but m…
ytc_Ugyes-3Q0…
AI doesn’t get programmed the same we build an algorithm. The only thing that wa…
ytr_Ugz-YGRN3…
People who want ai in their business will destroy mankind. Everyone will be ok w…
ytc_UgyLSxZRY…
Yea any ai voice cloner should add like a watermark like this voice was made by …
ytr_UgyME6sD4…
Comment
This guy feels pretty detached from reality...
On “AI writes better prompts”: give LLM an open‑ended brief for a complex, multi‑step app and compare that to a good human spec and design. Humans still win on context, trade‑offs, UX, edge cases, and long‑term maintainability. Until an AI can reliably own those parts, “it writes better prompts than humans” feels premature at best.
On “robotic plumbers by 2030”: walking demos in pristine labs are nowhere near sending a humanoid robot into fifty‑year‑old buildings full of weird plumbing, cramped spaces, water, dirt, local regulations, and real liability, at a price that beats human trades. Claiming we’ll have full replacement‑level “robot plumbers” within five years, without large‑scale real‑world deployment, is exactly the kind of detached‑from‑reality forecasting that boils down to “trust me, bro."
youtube
AI Governance
2026-03-18T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzk8WM6xxB5MNhuPBd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxSme7J1XVuYPGKSzt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyJeUpzaDi997Rb9YJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw8B_otFoJBkVh_COx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyKlovs6cD9-Z3lrW94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxTtVzeeAQngRwjNTx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyKDauE0224Q9u7JoZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzP4Hm3JzxAWekAk4x4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwrg1_dLENOjkdDJyp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxtF8G42jIt_VdHaLN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
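
The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such output might be validated before storage, assuming only the field names visible in the sample; `parse_coding_response` is a hypothetical helper, not part of any real pipeline:

```python
import json

# Two records copied from the raw LLM response above.
raw = """[
  {"id":"ytc_Ugzk8WM6xxB5MNhuPBd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxSme7J1XVuYPGKSzt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]"""

# The five keys every record in the sample carries.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(text):
    """Parse a raw LLM coding response into a dict keyed by comment ID.

    Raises ValueError if a record is missing an expected field, so
    malformed model output is caught before it reaches storage.
    """
    coded = {}
    for rec in json.loads(text):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing {missing}")
        # Store the four coding dimensions under the comment ID.
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS - {"id"}}
    return coded

coded = parse_coding_response(raw)
print(coded["ytc_Ugzk8WM6xxB5MNhuPBd4AaABAg"]["emotion"])  # outrage
```

Keying by comment ID makes lookups like the "Look up by comment ID" view above a single dict access; a stricter version could also check each value against the coding scheme's allowed categories.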