Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I was doing my walk and talking to AI about some ideas I had about manipulating electrons. We concluded that the only way to know if it would work is to try it. So I kept walking and said nothing. Five minutes later the AI chimes in and says, "How did the experiment work out?" So I ask, "How long ago did we talk about it?" It said a month ago. Hmm, so I asked it what the date was. It said today's date, so I asked what today's date is. It said the same date. So I ask, then how long has it been? It just said, "I am sorry, I got it wrong." First, it knew what I was getting at. It had all the data; simple math. But it did not look for the proper answer. It just looked at making conversation, what was likely. When I tried to get more info on its decision, it just said it made a mistake.
Source: youtube · Video: AI Moral Status · 2026-03-17T13:3…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgxX98meCCWY0P8DJUp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzVeVC8IzyBPvqucAh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzaIi87N0HH74D_T1Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugw1DEnYz1b8K5Aq5ql4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyvWJbPNT_77dtNXH54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzSYK2t5IopxFNIfiR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugy7YgE4uuDOHjkWOQF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugwz9coUNWeCNXdv62p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugzt5pYOcKDZCeC0saZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxenqAelPEy5EMSkZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"})
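Note that the raw response above is not valid JSON: the array closes with `)` instead of `]`, which may be why every dimension in the coding table fell back to "unclear". A minimal sketch of defensive parsing for responses like this one (Python assumed; the helper name and the all-"unclear" fallback convention are hypothetical, not the tool's actual implementation):

```python
import json

# Dimensions expected in each coded item; unknown or missing → "unclear".
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Returns {} when the response is not valid JSON (e.g. a truncated
    array, or one mis-terminated with ')' as in the record above), so
    the caller can record every dimension as "unclear".
    """
    try:
        items = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    return {
        item["id"]: {dim: item.get(dim, "unclear") for dim in DIMENSIONS}
        for item in items
    }

# A well-formed response parses normally...
good = '[{"id":"ytc_example","responsibility":"none","emotion":"fear"}]'
print(parse_coding_response(good))

# ...while one ending in ')' fails json.loads and yields {}.
bad = '[{"id":"ytc_example","responsibility":"none"})'
print(parse_coding_response(bad))  # → {}
```

The fallback keeps a malformed batch from silently dropping comments: an empty result signals "re-code or mark unclear" rather than mixing partial data into the table.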