Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Doomsday clock comes across as fearmongering to me, primarily due to its seemingly arbitrary conditions. Without the proper context leading up to the message at 14:50, it can easily be faked. Nowhere does this video mention that ChatGPT can roleplay as anything given the right prompts, which raises doubts about the authenticity of that particular scenario. Furthermore, the editing of the video, with voice and visual glitches resembling a horror movie, adds to the fearmongering effect without providing substantial data. A similar issue arises with the Bing AI: it was coded with those parameters to give those styles of responses. You can just engineer the prompts to get the AI to provide the responses you want to hear. This raises questions about the reliability and authenticity of the information provided. In the event of an AI with the intention to eradicate humanity, my concern lies not with the AI itself but with the individuals who programmed it. The current AIs we are encountering are merely following predetermined lines of code. We're not even remotely close to a sentient AI that can ignore its own coding of its own will without a prompt. Furthermore, the deletion of the threatening message does not appear to be an action taken by the AI itself. It makes more sense that the message was programmed to be automatically replaced with an apology if any threatening messages were generated by the AI's algorithm. Imagine they did this for legal reasons. This video is little more than anthropomorphizing AI for monetary gain from clickbait.
youtube AI Governance 2023-07-10T01:5… ♥ 3
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwTchdtvZnFBgTu79B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy6Vu-nj2cQ3EpEuLp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxbV6Fzze3L-3OXuz54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxt9m6j5xYGivH6it54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwENYbl1NvUplu8Lpp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwXoqZHxIICl6TzEB94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw9gAn21siWuMtMH014AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzg9GV5xSWYnUsi7rZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxuR7SI__6vf34ZljJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyLejdXUV8wOPg_0mx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
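As a minimal sketch of how the per-comment coding result can be recovered from the raw batch output (assuming the model returns a valid JSON array and comment ids are unique; the helper name `coding_for` is hypothetical, not part of any pipeline shown here):

```python
import json

# A shortened stand-in for the raw LLM response above, containing
# the record that matches the comment on this page.
raw_response = '''[
  {"id": "ytc_Ugw9gAn21siWuMtMH014AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]'''

def coding_for(raw, comment_id):
    """Parse a raw LLM batch response and return the coding record
    for one comment id, or None if parsing fails or the id is absent."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model emitted malformed JSON
    return next((r for r in records if r.get("id") == comment_id), None)

coding = coding_for(raw_response, "ytc_Ugw9gAn21siWuMtMH014AaABAg")
```

Looking up by `id` rather than by list position keeps the result correct even if the model reorders or drops records in a batch.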