Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a near perfect example of lucid, as well as calm, respectful exchange of ideas. My concerns about AI/LLM is less about the technology behind it, but more so the willingness of human nature to defer our own critical thought process to others and/or some “thing” else. Whether we are overwhelmed by information or just lazy, we must still maintain responsibility to think for ourselves. Critical thought with fair measure of skepticism is (and always has been) key.
youtube 2024-09-24T22:3…
Coding Result
Dimension: Value
Responsibility: user
Reasoning: virtue
Policy: none
Emotion: approval
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw_oUTPvkZvUAZMXTZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyGNxViQrjfKVm5Xh54AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxrSoegJLOrLjMbETR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgztXcGp-UN5SM4jOcF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwfsZ6pjqXLST2vITB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxBe0m_iWajDVlhKCd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxRoBwCsf06xKUaizx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxYCKspjr1Jq-ed-8F4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw77KYugVb7DzzFtH14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy7qB6VjNJiwEdJP1h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
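To inspect the model output for a single coded comment, the raw response can be parsed and indexed by comment id. A minimal sketch in Python, using an excerpt of the response above (the entry matching the coding result shown for this comment); the variable names are illustrative, not part of the tool:

```python
import json

# Excerpt of the raw LLM response above: a JSON array of per-comment codings.
raw_response = """
[
  {"id": "ytc_Ugw77KYugVb7DzzFtH14AaABAg",
   "responsibility": "user", "reasoning": "virtue",
   "policy": "none", "emotion": "approval"}
]
"""

# Index the codings by comment id for quick lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for the comment shown above.
coding = codings["ytc_Ugw77KYugVb7DzzFtH14AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → user virtue none approval
```

The printed dimensions match the Coding Result table for this comment (Responsibility: user, Reasoning: virtue, Policy: none, Emotion: approval).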