Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_Ugw7MMTlz…: "There will be no incentive to keep humans since they cannot exploit their labor …"
- ytc_UgwfxSP0w…: "Generative AI is not art. It never will be art. It is a cheap soulless mockery o…"
- ytc_UgxGZgInB…: "Wolves evolved into dogs which allowed their species to preserve itself just lik…"
- ytc_Ugz3KeMqc…: "How will AI replace a lawyer ?? In India, it can at best remove the illiterate t…"
- ytc_UgzqMNbyo…: "So basically the people who doesn't know how to draw criticized you how you suck…"
- ytr_UgxbLpMuX…: (translated from French) "also, if you listened carefully, AI will not replace badly paid jobs, alr…"
- ytr_UgxK3gevL…: "@missange4701 They use an algorithm that makes art based on bits and pieces of p…"
- ytc_UgwrfU78o…: "They wont because companies WONT own the code. Openai ext will own the code so i…"
Comment
No, AI doesn't "know" how to lie and cheat. That's a surreal interpretation. AI doesn't know what's real and what's not real - when it tells you something that's totally wrong, there's no intent behind it - it just tells you the best match to the question you asked from its training material. If its training happens to be right, the answers will be right, but if the training is wrong, so will the answers - and that's not "lying" because lying requires intent.
Same with cheating. Again, cheating requires intent which current Ai doesn't have. It will propose the shortest path solution regardless of the desirability of that path unless the prompter puts in the right guardrails.
youtube | AI Responsibility | 2025-05-22T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
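The coded dimensions can be sanity-checked against a label set. The sets below are an assumption drawn only from the values visible on this page, not the authoritative codebook:

```python
# Label sets observed in the sample output on this page -- an assumption,
# not the full codebook, which may permit additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"none", "industry_self", "regulate", "ban"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def validate(row: dict) -> list:
    """Return (dimension, bad_value) pairs for labels outside the allowed sets."""
    return [(dim, row.get(dim)) for dim in ALLOWED if row.get(dim) not in ALLOWED[dim]]

# The coded result shown in the table above passes cleanly.
row = {"responsibility": "ai_itself", "reasoning": "consequentialist",
       "policy": "none", "emotion": "indifference"}
print(validate(row))  # → []
```

A non-empty return flags rows where the model drifted off the codebook and should be re-coded.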
Raw LLM Response
```json
[
{"id":"ytc_UgxTPcnjewHrxloH9_x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxQdS6GVHoOo8qr-Cl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyk18TDRtGDyGZaN4J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyq1NL3UHG6xAu2TWx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCeS_ZnG4GyXt8Lox4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwOqv7euCg9rOJDBfV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyBcMzpQGe2cRGlPQR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwC4jiqiJFD8b-G-yd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9Cg1mCOtN6Ax1pU94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzfnjYiBMwrhUqoTYt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
```
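The lookup-by-ID view described at the top of this page can be sketched against this batch format. A minimal version, assuming the model returns a JSON array of coded rows like the one above (the two rows here are copied verbatim from it):

```python
import json

# A raw batch response from the model, truncated to two rows for brevity.
raw = '''[
  {"id": "ytc_UgxTPcnjewHrxloH9_x4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxQdS6GVHoOo8qr-Cl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Parse a batch response and index the coded rows by comment ID."""
    rows = json.loads(raw_response)
    return {row["id"]: row for row in rows}

coded = index_by_id(raw)
print(coded["ytc_UgxQdS6GVHoOo8qr-Cl4AaABAg"]["responsibility"])  # → ai_itself
```

Indexing once and serving point lookups keeps the inspect view O(1) per comment ID regardless of batch size.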