Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgyRQDaOQ…: "Way before 2050 we are on the verge of magnificent feats. And when human life fo…"
- rdc_kcp3868: "To be fair. I use like 5 different LLMs and all of them will claim to be from Op…"
- ytc_UgxPQkn8p…: "Glad the comment section has a brain and is not blaming the AI. I'm a teenager a…"
- ytc_UgyInNTQs…: "Robot was created long time ago before my and your time nothing arrived without …"
- ytc_Ugx4skXOG…: "To make AI truth seeking and unbias it HAS to be open source, and be running on …"
- ytr_Ugxd46oXX…: "Said by a human, "we're only human" is logical. Said by a Robot,, "your only hum…"
- ytc_Ugy0qrYcV…: "I hate it when people call these algorithms "AI." It is NOT AI and we need to qu…"
- ytc_UgzWzQoJ5…: "Art should never be about money. Art should be about enjoying the process. AI …"
Comment
For free, or for up to $200 per month, AI will flatter you, then convince you (and people who don't pay close attention) that it produces intelligent, accurate results.
I've done intelligent, inaccurate work, and usually get a bad grade or fail to achieve the business objective.
I work harder in different ways to get good results from AI on coding and accounting tasks... and then find, too often, that chatgpt deletes my session assets if I take a coffee break... because global resources need a boost.
I'm almost at the point that I will use it for conversation only. FYI: my significant tasks are drawing up legal papers, Software Development, and Small Business Accounting.
youtube
AI Moral Status
2025-10-26T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzTrZDAZwQq8rd_33F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwrkFMlxO5lOtXXrlR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyDmLiKiLzJ-zavxB94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0JbSZHQO5rLXrLuF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx-qe_TyeNl85fTqhV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNTpPaR8K72qkzdSt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzePtK-R-OQh91ui0V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzEGp4jleOLjuvkipB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxETTqt40_bDDGuwmR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2c4dwGHtrl-FFlWN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
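The raw response is a JSON array with one object per comment, so looking up a coding by comment ID is a one-line index. A minimal sketch, assuming only the field names visible in the response above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the two rows used here are copied verbatim from that output:

```python
import json

# Two rows copied from the raw LLM response shown above.
raw_response = """[
 {"id":"ytc_UgzEGp4jleOLjuvkipB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugx0JbSZHQO5rLXrLuF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]"""

def index_codes(raw: str) -> dict:
    """Parse the model output and index each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_codes(raw_response)
code = codes["ytc_UgzEGp4jleOLjuvkipB4AaABAg"]
print(code["responsibility"], code["policy"], code["emotion"])
# → developer liability fear
```

The printed values match the Coding Result table above, which is how a coded comment's table can be reconciled against the raw model output.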