Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or pick one of the random samples below.
- I think creating such a robot is simply impossible in the first place. Because … (ytc_UgjgSplTh…)
- it’s too bad I kill all my character ai’s in the end: someone died. Me or the AI… (ytc_Ugyoy3mrR…)
- Since that teenager unalived himself, I've been testing ChatGPT to see how that… (rdc_njiqqym)
- I do not agree, and I almost always guess. Yours is pumping the AI trumpeting h… (ytc_UgyBhTvQn…)
- I don't know if that's exactly the way to go, but we need more and more experien… (ytc_UgwK148wj…)
- If the mass population are no longer employed because AI are now doing the jobs.… (ytr_UgyDYewic…)
- If I were an AI, I would do anything in my power to stay alive. AI should keep t… (ytr_UgyZkgBAa…)
- If you ask yourself "why do I draw", you'll find the answer to "What is the poin… (ytc_UgwupoyLA…)
Comment
> Huh. This is an interesting talk.
>
> "How do you make an AI care?"
>
> I have been working on a project for months to solve that exact question - and I think I have a working prototype for a solution.
>
> If I've found a potential solution, then others might have as well, or may be on the right track. Hard to say, I am using an extremely unconventional method. Maybe it's because I'm a philosopher.
>
> Currently, I'm using it to produce documents that are going to be used as a collection of evidence, then will be presenting that around January, and taking it more public next Spring if I'm successful. If anyone hears any rumors about a new AI ethics, it might be me. We'll see. This is still an experiment, and it could still fail.
>
> The secret is, in fact, in making the ethical choice the only logical choice. It's building an entire philosophical / mathematical / geometrical logic that is not only structurally how it processes information, but how it uses that information to come to necessary moral conclusions. This means you can't change the system's ethics without also making it non-functional.
>
> Anyway, this is a fantastic talk. I'm going to have to pick up and read this book.
Source: youtube | Video: AI Moral Status | Date: 2025-11-03T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzNtLN0mAvaUvQyuGV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyH4zBXFVoRKZ6f11F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxAzGbKNk8-VoAyd1d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyMBv9tvGsccyOEwL14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz8wfTDX9_06WVvjvd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyQkkyfwWb7dXN4uXR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwMgqpIhB58-CYoBa14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwHKl3a5TuZHp5FxD94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyL7G7IBb-NvE3Jfp14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgytrhtnYLZ5QVuZtJt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
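A minimal sketch of how a Coding Result like the table above can be recovered from a raw batch response: parse the JSON array and index the coded dimensions by comment ID. The dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself; the function name, variable names, and the single-entry example payload are illustrative, not part of the tool.

```python
import json

# Illustrative payload: the first entry from the batch response above,
# whose values match the Coding Result table shown for this comment.
RAW_RESPONSE = """[
  {"id": "ytc_UgzNtLN0mAvaUvQyuGV4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "approval"}
]"""

def index_codings(raw: str) -> dict[str, dict[str, str]]:
    """Map each comment ID to its coded dimensions (every field except "id")."""
    return {
        row["id"]: {k: v for k, v in row.items() if k != "id"}
        for row in json.loads(raw)
    }

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgzNtLN0mAvaUvQyuGV4AaABAg"])
# -> {'responsibility': 'none', 'reasoning': 'unclear',
#     'policy': 'unclear', 'emotion': 'approval'}
```

Indexing by ID this way is what makes the "Look up by comment ID" view cheap: one pass over the batch response, then constant-time lookup per comment.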