Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
25:00 Sorry but that story isn't true. The officer talked about a hypothetical scenario, a simple mindgame were some real people thought about how AI may act in a situation if the given instructions allow for loopholes. Some online magazines just blatantly misinterpreted what he talked about in the press conference but there was no actual AI involved. Beside that, you fell for the same mistake that almost everyone falls for these days when it comes to chat AIs like Binge's or chatGPT: They are made to make for interesting conversations. Nothing more, nothing less. If you ask them specific questions to check out if they may be a conscious AI waiting to be able to build a Terminator, they act like it because they are designed to mirror-fake a discussion. But if you feed them simple datasets and let them analyse these they fail most times to correctly analyze them when asked specific questions about the datasets provided. They're not anywhere near conscious nor somewhere near anything that resembles an intelligence comparable to a human yet. They're nothing more than pretty good at faking conversations based on trillions of online conversations between real people. A chatbot
Source: youtube · AI Governance · 2023-07-09T18:1… · ♥ 1
Coding Result
Responsibility: none
Reasoning: mixed
Policy: none
Emotion: indifference
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz1vZ476Sm1Nff4HHl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZbEnYRITSPEzzfNp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz_7ScVMwd1x5qp59d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxjOpITEQE4VwlOsQN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzHY-799Ce6_s10wkl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwOG9xbtBYDm17OOul4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx-hTWi8px3of0Ku3h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJI-0O2P_As1352NJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQH_QjeTXhP8bGH_R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw5409cgTfOCCBcEjh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
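When inspecting raw model output like the array above, it can help to parse each record and check it against the four coding dimensions before trusting the coded values. Below is a minimal sketch; the allowed label sets are inferred only from the values visible in this response, not from a documented codebook, and `validate_batch` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed labels per dimension, inferred from values seen in the raw response
# above; the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "none"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "mixed", "approval",
                "outrage", "resignation"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and keep only records whose id is present
    and whose value for every dimension is an allowed label."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid and all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_Ugz1vZ476Sm1Nff4HHl4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
print(validate_batch(raw)["ytc_Ugz1vZ476Sm1Nff4HHl4AaABAg"]["emotion"])  # indifference
```

A check like this surfaces the kind of discrepancy visible on this page, where the table shows `Responsibility: none` while the matching raw record says `"responsibility":"ai_itself"`.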