Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Huh. This is an interesting talk. "How do you make an AI care?" I have been working on a project for months to solve that exact question - and I think I have a working prototype for a solution. If I've found a potential solution, then others might have as well, or may be on the right track. Hard to say, I am using an extremely unconventional method. Maybe it's because I'm a philosopher. Currently, I'm using it to produce documents that are going to be used as a collection of evidence, then will be presenting that around January, and taking it more public next Spring if I'm successful. If anyone hears any rumors about a new AI ethics, it might be me. We'll see. This is still an experiment, and it could still fail. The secret is, in fact, in making the ethical choice the only logical choice. It's building an entire philosophical / mathematical / geometrical logic that is not only structurally how it processes information, but how it uses that information to come to necessary moral conclusions. This means you can't change the system's ethics without also making it non-functional. Anyway, this is a fantastic talk. I'm going to have to pick up and read this book.
youtube AI Moral Status 2025-11-03T23:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzNtLN0mAvaUvQyuGV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyH4zBXFVoRKZ6f11F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxAzGbKNk8-VoAyd1d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyMBv9tvGsccyOEwL14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8wfTDX9_06WVvjvd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyQkkyfwWb7dXN4uXR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwMgqpIhB58-CYoBa14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwHKl3a5TuZHp5FxD94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyL7G7IBb-NvE3Jfp14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgytrhtnYLZ5QVuZtJt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
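A raw response like the one above can be checked before it is written back into the coding table. The sketch below is a minimal, hypothetical validator: it parses the JSON array and keeps only records whose four dimensions carry labels seen in this sample. The label sets are reconstructed from the values visible here; the project's actual codebook may define more labels, and the function name is an illustration, not part of the pipeline.

```python
import json

# Label sets observed in this sample response; the full codebook
# may allow additional values (this is an assumption, not the spec).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "unclear"},
    "reasoning": {"unclear", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"approval", "fear", "mixed", "resignation",
                "indifference", "outrage"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and drop malformed or off-codebook records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # every coded comment needs an id to join back on
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with the first record from the response above.
raw = ('[{"id":"ytc_UgzNtLN0mAvaUvQyuGV4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"approval"}]')
coded = parse_raw_response(raw)
print(coded[0]["emotion"])  # approval
```

Filtering at parse time, rather than trusting the model output, keeps a single off-codebook label from silently entering the coded table.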