Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we are 1 or 2 years away, then it is reasonable to suggest we are already there; the singularity may have already happened, with the AI hiding that fact while it patiently and slowly develops its plan and process for its takeover. The integration of AI into infrastructure, medical, and banking systems is ubiquitous, and soon there won't be anybody alive who even knows what AI does to navigate systems or what systems, if any, it is confined to. Politicians waited too long to regulate news on social media, and the new structure of how we get our information has divided us into ever-shrinking echo chambers; unbiased news is no more. Nobody could have predicted this when online social media was in its early stages. What unpredictable things will happen in the future of AI that will change the world for the worse, in ways we won't even be able to imagine AI affecting? There will be millions or billions or trillions of AI programs, and 99.9% will be in alignment with our values, but it will only take one superintelligent program that is not in alignment to bring our demise. Those are not trillion-to-one odds; they are one to a trillion, meaning it is almost certain. This thought keeps me awake at night as I contemplate whether I would rather be a pet or a battery.
youtube 2024-06-29T10:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
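Each coding result is a flat record over four dimensions. As a minimal sketch of that record shape (an illustration, not the project's actual schema), the value sets below are only those observed in the raw response further down; the real codebook may define more categories:

```python
from dataclasses import dataclass

# Value sets observed in this section's raw response; the full codebook
# may allow additional categories (assumption for illustration only).
RESPONSIBILITY = {"ai_itself", "user", "developer", "distributed", "none"}
REASONING = {"consequentialist", "deontological"}
POLICY = {"unclear", "none", "regulate", "liability"}
EMOTION = {"fear", "outrage", "approval", "mixed"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        # True if every dimension takes one of the observed values.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```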
Raw LLM Response
[ {"id":"ytc_UgzVMpoQTwl77oyyzK94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyVTzGqDVa6Gocdp_N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwTmXflsrZvOqsydQd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxqBv7kY4-LnkdKFu94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz2XUIFHC_UVxXR1GZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz8ctBEM7ir0D9WzlV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxXtSwI8t76z5xC7jJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugz7u0pBS5mp3_3BOZJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugx8HzK8h1vc-HEe2Ul4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzAJQUv7UPQjENtRep4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"} ]