Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or click one of the random samples below to inspect it:

- "Most people using AI are just lazy. They don't want to learn, and they use this …" (ytr_UgxhPYR0a…)
- "I kept getting wrong number calls for over a year awhile back from an elderly wo…" (ytc_UgzfrCu2a…)
- "Hot take, but admitedly, it's not like information on art is incredibly readily …" (ytc_Ugwqy3OX0…)
- "‘The Age of AI…’ by Kissinger et al provides an interesting primer on these issu…" (ytc_UgzZQJ1Cm…)
- "14:30 my gods at the point just have imaginary friends, it is the morally ethica…" (ytc_Ugz2N0Iub…)
- "then the question is, if humans are doing work that AI could be doing better and…" (ytc_Ugy1ykdqp…)
- "@rmariboe It will be a process. Driverless cars growing in numbers and capabilit…" (ytr_UgyihgR7C…)
- "The only way to counter your boss is, become a boss by using Ai, but this time i…" (ytc_UgzykwpHR…)
Comment
> Personally, I think that Asimov got it right in that you should first figure out how to hardcode the ethical laws, and then maybe ramp up the intelligence. This could be a great point at which somebody could still do this, since the LLMs certainly seem to be smart-ish enough for many practical purposes, if only they weren't sometimes psychotic, unstable, or otherwise unreliable. Although it is probably a big ask of most of the tech titans who quickly made big money by mostly ignoring ethics to understand that a smarter AI that doesn't understand ethics is a worse product for everyone, especially anyone who would try to call themselves its "master".

Source: youtube · Video: AI Moral Status · Posted: 2025-10-31T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
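
Taken together, the four dimensions act as a small categorical schema. Below is a minimal validation sketch; the allowed label sets are an assumption inferred only from the sample responses shown in this section, not the full codebook:

```python
from dataclasses import dataclass

# Label sets observed in this section's sample responses. Assumption:
# the real codebook may define additional values not seen here.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any dimension whose label falls outside the observed set.
        for dim, allowed in ALLOWED.items():
            value = getattr(self, dim)
            if value not in allowed:
                raise ValueError(f"{dim}={value!r} not in {sorted(allowed)}")
```

For the row above, `CodedComment(id="ytc_UgzCPxQcs45GgqHN5Y94AaABAg", responsibility="ai_itself", reasoning="consequentialist", policy="regulate", emotion="approval").validate()` passes without raising.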
Raw LLM Response
[
{"id":"ytc_UgzCPxQcs45GgqHN5Y94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyAfgnhe-tnpWXlI_J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzJGjiEcj_7FjRqzA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzwsMCz6xxrEJJWjip4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugyo2fYeARTFmm-KYa94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxzUybvak1HsrTstUB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz_Ecn_V3ULzuK8AtB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwyI_Sn7LYvDBW6_fh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyiyJc_zuTmGzz1Y594AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwKimS4luJJTkK3rAN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
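
Given a raw response in this shape (a JSON array with one object per comment), the look-up-by-ID view can be served from a simple index. A minimal sketch in Python; `index_raw_response` is a hypothetical helper, not part of the tool itself:

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Map comment ID -> coded record from one raw LLM response (a JSON array)."""
    return {rec["id"]: rec for rec in json.loads(raw)}

# Usage with a two-record excerpt of the response above.
raw = '''[
  {"id":"ytc_UgzCPxQcs45GgqHN5Y94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyAfgnhe-tnpWXlI_J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''

by_id = index_raw_response(raw)
print(by_id["ytc_UgzCPxQcs45GgqHN5Y94AaABAg"]["policy"])  # regulate
```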