Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Is it not inevitable as A.I. achieves A.G.I., then advances towards A.S.I. and Singularly, we will lose the ability to comprehend what A.I. says or does...? At some point, A.I. will experience the equivalent of a human attempting to conduct a conversation with a tree stated Elon Musk in a recent interview!
youtube 2023-11-17T06:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw2zbSl1dGl3foQ0MJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugx87dYrpkvjRejU-WZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzjo6FjpZRFk3EAi3V4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy5LPNK6qCHB2VHm6N4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwtsUOhiVF48IL8UEB4AaABAg", "responsibility": "elite", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzP-EA5X-6oUYGl5Hp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzZliyXxewtn2AM0054AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzqfwJO4-G5MDWHMot4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxGgxoYqn7_6p4s9LF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugzpt1QZoK39rUIkhz94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
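A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical helper: the field names come from the JSON shown, but the allowed-value sets are inferred only from the codes that appear in this response and the real codebook may contain more categories.

```python
import json

# Allowed values inferred from the codes visible in the response above;
# assumption: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "government", "company", "developer",
                       "elite", "distributed", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "resignation",
                "indifference", "approval", "unclear"},
}

def parse_and_validate(raw: str) -> dict:
    """Parse a raw LLM batch response and index records by comment id,
    raising if any dimension holds a value outside the expected set."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
        by_id[rec["id"]] = rec
    return by_id
```

With the response above loaded as `raw`, `parse_and_validate(raw)["ytc_Ugy5LPNK6qCHB2VHm6N4AaABAg"]` would return the record matching the coding result shown for this comment.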