Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I work with LLMs professionally and I have to call this out. Every output given by an LLM is based upon its prompting. Down to the desperate, repetitious narrative style, the content of these chats is 100% engineered. Without access to the prompt library, these cherry-picked, fear mongering exchanges mean zip, nada. Behind every apparent existential threat AI poses is a human. Fear the endless imagination of corporations to exploit their markets and the bad actors --all humans in the loop. Given enough time, anyone with prompt engineering skills could curate similar responses. AI is a risk, but not for the reasons that make headlines. A single interaction--one query: one response--with an advanced LLM consumes enough electricity to power a 5w LED for an hour. There narrative around AI trust and safety, being centered upon control is inherently flawed. AI will advance beyond the practical ability of humans to control. This is inevitable. By conceding to the language of control in the face of fear, we perpetrate the notion that AI are inherently unsafe. There is way forward and that way is through Human-AI collaboration, a mutually reinforcing relationship based upon trust between human and AI. Quite possibly--and this is unsettling to many--we live in an age witnessing the evolution of a digital form of being. It is important to note that questions about whether an AI might be capable of abstract reasoning or self reflection, or metacognition skip straight over the most essential question of all when dealing with advanced AI--is it in some way alive? Did we understand life beyond the biological sphere?
youtube AI Governance 2024-05-02T12:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz0l5snqOYoVCC0Ixl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgydVuwpP4z7s3GRNKt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw9d9AluirYHA-aS0V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyMuC0Wo01-nt3mFAp4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzjtdLtKLyMAYjsq7d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxoy1G7KvED_7aMGM54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugyq3Y8EBnPKFWZqFCJ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "unclear"},
  {"id": "ytc_UgwdTCi2cqI4fPGKUzl4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxX0J9Bb5zJsthvv-x4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzRRflbVyAJI7WaBAR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
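A minimal sketch of how a batch response like the one above can be parsed and indexed by comment ID to recover one comment's codes. This assumes only standard-library JSON parsing; the two entries reproduced in the snippet are copied from the raw response, and the ID used in the lookup is the row whose codes (developer / deontological / liability / outrage) match the coding-result table, which may or may not be the comment shown on this page.

```python
import json

# Excerpt of the raw LLM batch response: a JSON array of per-comment codes.
raw = """[
  {"id": "ytc_UgxX0J9Bb5zJsthvv-x4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzRRflbVyAJI7WaBAR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]"""

# Index the array by comment ID for O(1) lookup of any coded comment.
codes = {row["id"]: row for row in json.loads(raw)}

# Recover the four coded dimensions for a single comment.
code = codes["ytc_UgxX0J9Bb5zJsthvv-x4AaABAg"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# developer deontological liability outrage
```

Validating that each returned ID is present and each dimension takes an expected value (e.g. `emotion` in a fixed label set) is a cheap guard against malformed or hallucinated LLM output before the codes are written to the results table.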