Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
on the topic of intelligence, Im in the camp that thinks that intelligence is ability to not only solve problems but identify and schematize the problems. AI, AGI , LLM however you want to call them are not real intelligence because their identification of problems is not really good and the solving is mid. A person that is emotionally intelligent might now how to use deception as a way to de escalate and comfort. A person that is logically intelligent might instead of answering a question, reduce or enhance the scope and pinpoint the core problem and question, (for example, you cant figure how to close a jammed car window and is raining, you might instead of trying to fix the window , choose instead to use something to cover or block the rain from coming in, or put something to protect the seat from water) , Creativity is another thing that AI dont seem to be able to do, they can copy and plagiarize but true innovation is not something they can do so far. A socially intelligent person might know how to source and network a problem. The list goes on, AI is a misnomer imo, misleads people into the notion that intelligence is the same as having access to information. While I think it can be useful, I think they over estimate what it can do. I also have serious ethical questions if such thing as singularity happens because of personhood concerns and thats a whole other can of worms.
youtube 2026-04-13T06:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyTp0lYd0Y2tc7Q83B4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyt5kIMDba5McvdU8N4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz9chuAeg2gmcHVoQV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyJbRT05x-TgtdKmC54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyVU9HcLEaTGRN9wJp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy4NuhHcdMHS8tszP54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "disapproval"},
  {"id": "ytc_UgwEMToE16KHCzLVMml4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwBHCrG4_J-b1fqLxx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxRZls--KPmTuYaRg94AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwpnbiMiSp0BiKV1Nx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]
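A raw LLM response like the one above can be checked before its codings are stored: parse the JSON, confirm every record has an `id`, and reject any value outside the coding vocabulary. The sketch below is a minimal, hypothetical validator; the allowed-value sets are inferred only from the values visible in this output and are not a documented schema.

```python
import json
from collections import Counter

# Allowed values per coding dimension, inferred from the results shown
# above -- this vocabulary is an assumption, not a documented schema.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "none", "distributed"},
    "reasoning": {"mixed", "deontological", "consequentialist",
                  "contractualist", "unclear"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"indifference", "approval", "outrage", "fear",
                "disapproval", "mixed"},
}

def validate_codings(raw_response: str) -> Counter:
    """Parse a raw LLM response and tally (dimension, value) pairs.

    Raises ValueError on a record missing its id or carrying an
    out-of-vocabulary value, so malformed model output fails loudly
    instead of being silently stored.
    """
    records = json.loads(raw_response)
    tallies: Counter = Counter()
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={value!r}")
            tallies[(dim, value)] += 1
    return tallies

# One record from the raw response above, used as a smoke test.
raw = ('[{"id":"ytc_UgyTp0lYd0Y2tc7Q83B4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"mixed",'
       '"policy":"none","emotion":"indifference"}]')
tallies = validate_codings(raw)
print(tallies[("emotion", "indifference")])  # 1
```

Failing loudly at ingest time is the design choice here: a coding pipeline that quietly accepts an unexpected label would corrupt every downstream tally.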