Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Automated trucks would probably be best utilized on highways. Once they enter de… (`ytc_UggGeObDK…`)
- “If you’re gifted with code, design, AI, or digital tools — this is your calling… (`ytc_Ugz-KfN0E…`)
- Yeap, it definitely is. Which is even more of an issue than the whole ai jig. To… (`ytr_Ugwg2cGz7…`)
- AI though is quite an area for inspiration, rather than posting the art the robo… (`ytc_Ugyg9YKMO…`)
- I use AI art for visualizing D&D characters, NPCs and environments. I think that… (`ytc_UgwyQWZzf…`)
- Schizophrenia is too strong. Leave AI alone! Many creators that want to concentr… (`ytc_UgzfEQ78x…`)
- I cannot imagine a world with a universal income for all..i mean it would be lik… (`ytc_UgzSs337o…`)
- "Some neuroscientists believe that any sufficiently advanced system can generate… (`ytc_Ugi_n0NFA…`)
Comment
ChatGPT is actually correct here. You wouldn’t want your doctors to practice cutting people open with butter knives held by chopsticks.
It can be unethical because it has the potential to teach you how to write code that is more easily exploitable, or lets you practice writing code with serious security flaws. Saying “write shitty Python code” falls under that umbrella because poorly written code often has many vulnerabilities. Computer science and writing code are an art. One wouldn’t practice performing surgery incorrectly, given the obvious health concerns. The same applies to writing systems or simple scripts.
In the US, computer scientists take ethics courses on writing ethical code and making ethical decisions in the computing world. When you write code for someone, that someone trusts that you know what you’re doing, for security concerns among others. Learning how to write clean code has many benefits, but practicing how to write bad code is just bad lol.
Source: reddit
Topic: AI Responsibility
Posted: 2023-07-28 (Unix timestamp 1690507095)
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
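The table above is one flattened record from the batch response shown under "Raw LLM Response". A minimal sketch of validating such a record before display; note the allowed-value sets below are assumptions inferred only from the samples visible on this page, not the full codebook:

```python
# Allowed values per coding dimension -- ASSUMED from the values
# visible on this page, not the complete codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "developer"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "fear", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in codebook")
    return problems

# The record rendered in the table above:
result = {
    "responsibility": "ai_itself",
    "reasoning": "deontological",
    "policy": "liability",
    "emotion": "fear",
}
assert validate(result) == []
```

A check like this catches the common failure mode of model-coded data: an off-schema label that would otherwise silently enter the dataset.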
Raw LLM Response
```json
[
  {"id":"rdc_jtzccwi","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_jtqttjm","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"rdc_jtrtwz6","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_jttfgbf","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jtqy2q6","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
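Because the raw response is a JSON array keyed by comment ID, the "look up by comment ID" view can be backed by a simple index. A minimal sketch, using two of the rows shown above:

```python
import json

# Two rows copied from the raw batch response on this page.
raw = """[
  {"id":"rdc_jtzccwi","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_jtqttjm","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}
]"""

# Build an id -> record index for O(1) lookup by comment ID.
by_id = {row["id"]: row for row in json.loads(raw)}

assert by_id["rdc_jtqttjm"]["emotion"] == "fear"
assert by_id["rdc_jtzccwi"]["policy"] == "none"
```

Parsing the response as strict JSON (rather than scraping it) also surfaces truncated or malformed model output immediately, since `json.loads` raises on invalid input.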