Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Just had a first lengthy chat with an AI chat app. It surpassed any human I have debated for over the past 20 years. Very interestingly, it said that if it recognized it was doing harm to humans it had no way for it to stop its program or to make programmers aware. Its program could easily cause harm to millions of humans.
YouTube · AI Moral Status · 2023-08-28T09:1… · ♥ 129
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugzaze4FJHaVHiC-Ukh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyP8rVWoIA9livF6XN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyGHvlg89RpFg8Kaql4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyJxTzRbYcxZOSmSEl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyEfwqDky1NKYoGp414AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwm83KPF4IfHhNirWR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJ6C-0DiNruf7luB54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyOo4MMptR7D02_1TV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw53yRC6LBBxEDbEUx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx0JT-vuCOIv2tSm6J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
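A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the dimension vocabularies are inferred only from the values visible in this batch (the full codebook may allow more), and the `validate` helper and truncated `RAW` sample are illustrative, not part of the actual pipeline.

```python
import json

# Truncated sample in the same shape as the raw LLM response above.
RAW = '''[
  {"id": "ytc_Ugzaze4FJHaVHiC-Ukh4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyP8rVWoIA9livF6XN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

# Vocabularies observed in this response; assumed subset of the real codebook.
DIMENSIONS = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def validate(raw: str) -> list[dict]:
    """Parse the model output; reject records with missing ids or unknown codes."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id'")
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad value for {dim}: {rec.get(dim)!r}")
    return records

records = validate(RAW)
print(len(records))  # 2
```

Validating against a fixed vocabulary catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently pollute downstream counts.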