Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I love the statement about "garbage in, garbage out". I believe current AI is being developed as an algorithm that does not know what is right and what is wrong (human moral concepts, relatively speaking), so it can take garbage data/information in and therefore produce garbage data/information out. Newer AI is being developed as an algorithm together with human prompting (the knowledge/wisdom of having been there, done that, and knowing right from wrong relatively speaking, as there is no absolute right or absolute wrong). However, the human prompt is not fast enough to intercept the algorithm with guidance; one would have to be superhuman to feed in the correct data/information and get correct data/information out. It means we must stop developing AI for now until we address AI prompting - just as when my software team develops an application and it does not work, we must keep working on the application until it works...
youtube AI Governance 2023-07-03T11:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz_mmwvyuoIvW17G6N4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy337LIqv1A7ReOSel4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzAdoUTOjZOinbcsel4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz-vttR17N1V1Mgtox4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy05o6c4PVRPDztLjh4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxGY60o8L-HdMa8FiF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzSarOVTv3CFBQOfsF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzEunibeff5JVqcxBp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxclEU3qRwjFnE3jmN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyEdycRzRYzwSP57XV4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
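The raw response is a JSON array of per-comment coding records, each with an id plus the four coded dimensions. A minimal sketch (Python standard library only, using one record copied from the response above) of how such output could be parsed and a single comment's codes looked up by id:

```python
import json

# Raw LLM response: a JSON array of coding records, one object per comment.
# This sample holds one record from the batch shown above.
raw = """[
  {"id": "ytc_UgzEunibeff5JVqcxBp4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "none",
   "emotion": "indifference"}
]"""

records = json.loads(raw)

# Index the records by comment id so one comment's coding is a dict lookup.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_UgzEunibeff5JVqcxBp4AaABAg"]
print(coding["responsibility"], coding["reasoning"])  # developer deontological
```

This matches the coding table above for that comment: responsibility "developer", reasoning "deontological", policy "none", emotion "indifference".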