Raw LLM Responses
Inspect the exact model output for any coded comment. Raw responses can be looked up by comment ID; a few random samples are listed below, followed by a sketch of how the same lookup can be reproduced offline.
Random samples:

- "Why not shut it down, now if we know only few of us will get a job if we let it …" (ytc_UgwPw2-Vx…)
- "Looooooool, same for art, in all honesty. People who depend entirely on AI to ge…" (ytr_Ugx2c0Mb5…)
- "Thankfully his Drs came to this diagnosis without using AI? My GP uses Wikipedia…" (ytc_UgyuoRWYV…)
- "AI is also starting to infest literature - and someone posted a very simple, yet…" (ytc_UgytN1f4q…)
- "Ai requested to search for froud just focust to the truth ask number or transact…" (ytc_UgwaU66Jl…)
- "Neural networks are created and updated by algorithms, but the data that describ…" (ytr_UgyPOlPNw…)
- "we have to make sure that AI is presented with compassion and believe it to be a…" (ytc_Ugx235M1a…)
- "Then you got morons like me using several instances of LLMs counting to a millio…" (ytc_UgwSlgEo1…)
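The page lookup can also be reproduced offline against an export of the raw responses. Here is a minimal sketch, assuming the records have been exported one JSON object per line to a file named `raw_llm_responses.jsonl`; the file name and layout are assumptions for illustration, not part of the published interface:

```python
import json

def load_raw_responses(path: str) -> dict[str, dict]:
    """Index raw LLM coding records by comment ID (assumed JSONL export)."""
    records: dict[str, dict] = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            records[record["id"]] = record
    return records

# Hypothetical usage with one of the full IDs from the batch shown later on this page.
responses = load_raw_responses("raw_llm_responses.jsonl")
print(responses.get("ytc_Ugzkm5E2aeH70vkP_g94AaABAg"))
```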
Comment
That is rather interesting what pain is to robots. In reinforced learning we define rewards and design algorithms to try to maximize reward when the model interacts with the environment. If that behavior is somehow defined as "the urge to minimize reward loss", would that by synonymous to "low reward = pain", and "avoid bad strategy to maximize reward = try to do something else to ease pain"?
Maybe current machine learning is not that complicated to consider robot morals seriously... but that would be a pain in the neck trying to define all sorts of moral stuff for robots.
Source: youtube · AI Moral Status · 2021-08-16T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
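The four coded dimensions map naturally onto a small typed record. The sketch below enumerates only the values that appear in this page's sample output; it is an assumption about the codebook for illustration, not its official definition:

```python
from dataclasses import dataclass

# Value sets observed in the sample responses on this page; the full codebook may allow more.
RESPONSIBILITY = {"none", "ai_itself", "developer", "company", "distributed"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none", "ban", "regulate", "industry_self"}
EMOTION = {"approval", "indifference", "mixed", "fear", "resignation"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the table above (field names assumed)."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """Check each dimension against the value sets seen in this sample."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```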
Raw LLM Response
```json
[
{"id":"ytc_Ugzkm5E2aeH70vkP_g94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw9M3WY7NCOrpI9_454AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyS3o_P3RZxcCClshl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7TTICvq0ZW_9I_hN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwmRs33CImqnsz18Sd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx3iA_IsCYhwrTXck14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyLyWoMMB36UM-HYqF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwyLsusbkulNIdFTe14AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"mixed"},
{"id":"ytc_UgzMJrYEp-_3RERUp6F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxdD62H6E2g6Psu3t94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
```
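The response itself is a bare JSON array with one object per comment in the batch. Model output is not always guaranteed to come back this cleanly, so a defensive decode that tolerates an optional markdown fence or surrounding prose is a common safeguard. The sketch below is a general-purpose pattern under that assumption, not something this pipeline is confirmed to use:

```python
import json
import re

def parse_batch_response(text: str) -> list[dict]:
    """Decode a raw LLM batch response into a list of coding records."""
    stripped = text.strip()
    # Strip an optional ```json ... ``` fence around the payload.
    fenced = re.match(r"```(?:json)?\s*(.*?)\s*```$", stripped, flags=re.DOTALL)
    if fenced:
        stripped = fenced.group(1)
    try:
        return json.loads(stripped)
    except json.JSONDecodeError:
        # Fall back to the outermost bracketed span if prose surrounds the array.
        start, end = stripped.find("["), stripped.rfind("]")
        if start == -1 or end <= start:
            raise
        return json.loads(stripped[start:end + 1])
```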