Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@nickmagrick7702 Until AI is actually capable of making decisions on its own there isn't really a reason to fear machines, it's the humans who make decisions still and in case it does get into bad hands it will be humans fault. I think at least that AI is probably the only thing that can actually save us from ourselves. The problem we all are facing is very complicated and there is no way of going back, now one of the only choices is to move forward. We need to be careful with AI but we can't just halt all progress. In my opinion, it's more likely that humans would commit omnicide as opposed to AI enslaving us or killing us. The bigger problem is AI and robotics taking peoples jobs and if the government is way behind in dealing with the job losses it will be the end of most of us. We need to have systems in place to accommodate for these job losses and the best option so far seems to be UBI.
youtube 2019-05-29T18:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgxXr6p3-h5zZgZePGF4AaABAg.8ecRe2Z2gO98vX52Uxxone", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgzW3d21sud-E-HrJYN4AaABAg.8ebVr1dlSd78jjJ_91IT7a", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzBm_yAJquStsoKJwF4AaABAg.8ebOqOqTXZZ8ebPJMTl5Mv", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_Ugy-1gvOJvxS7WgBqXF4AaABAg.8eat0P4kTMI8ekcD1DaV7z", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugx_Dp7z_bCRfowJxJl4AaABAg.8earcRX7r8v8eat8j4cDeR", "responsibility": "user", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytr_Ugx_Dp7z_bCRfowJxJl4AaABAg.8earcRX7r8v9A5lUlBsvaW", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugw8Yc2mgu9FoHa2_5F4AaABAg.8eaizhUGVh99ELz8n3uRFB", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzZoRdIrjkSS-bAyZ54AaABAg.8eaPGuCKRvw8ekb9xN-LxT", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgyMkcekidWccs1Q-Pt4AaABAg.8e_m0ycRZ9-8ea3O4mojMA", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgyoScBtzkbFRIA7FKl08ed496YpWyi", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
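
A response like the one above can be turned back into per-comment codes with a minimal parsing sketch. The `parse_codings` helper and the `DIMENSIONS` tuple below are assumptions based solely on the JSON shape shown here, not part of the original pipeline:

```python
import json

# Dimensions expected in every coding object (assumed from the raw response above).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response into a dict keyed by comment id.

    Raises ValueError if an entry lacks an id or any expected dimension.
    """
    codings = {}
    for entry in json.loads(raw):
        comment_id = entry.get("id")
        if not comment_id:
            raise ValueError(f"entry without id: {entry!r}")
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{comment_id}: missing dimensions {missing}")
        codings[comment_id] = {d: entry[d] for d in DIMENSIONS}
    return codings

# Usage with the first entry from the raw response above:
raw = '''[
  {"id": "ytr_UgxXr6p3-h5zZgZePGF4AaABAg.8ecRe2Z2gO98vX52Uxxone",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]'''
codes = parse_codings(raw)
print(codes["ytr_UgxXr6p3-h5zZgZePGF4AaABAg.8ecRe2Z2gO98vX52Uxxone"]["emotion"])
# → approval
```

Validating every entry up front makes malformed model output fail loudly at parse time rather than surfacing as a missing value in the coding table later.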