Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How to make "AI safe"...isn't this a moral imperative? Blaise Pascal talked about the limitations of logic since logic starts from the presuppositions and ends where metaphysics begins. The reason AI is dangerous is because an algorithm is working with no control or no visible control over the presuppositions...the inputs. Therefore you can have great logic but it is the inputs which are determining the outcome.
youtube · Cross-Cultural · 2025-11-04T11:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw5PRNLiXttqUmcfKl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzzwuCusA6iydid_l14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFq_4pD5SbFDUdR1d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzNR3tUloTCoQE_wJN4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxCVIqnZ0J3q_rBG6d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzzkMnFqnf_98rm4xN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz7PmjDBQjlDbZT1_N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy7EoXqL2e5FlFIIWp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgysNujVVkQY09zHsZl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwktZipjXoZdhQkggd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
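A raw response like the one above can be parsed and sanity-checked before its records are stored as coding results. The sketch below is a minimal, hypothetical validator: the allowed category values are inferred only from the labels visible in this dump and may not cover the full codebook, and `validate_response` is an illustrative name, not part of any pipeline shown here.

```python
import json

# Allowed values inferred from the labels visible in this dump (assumption:
# the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company", "government"},
    "reasoning": {"virtue", "deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"resignation", "indifference", "fear", "approval", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    all belong to the allowed vocabulary for each dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: one well-formed record passes through unchanged.
raw = '[{"id":"ytc_abc","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}]'
print(validate_response(raw))
```

Filtering rather than raising keeps one malformed record from discarding an entire batch; rejected records could instead be logged for manual review.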