Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with delving deep into one's own narrative is that you inflate fake problems while preventing understanding real ones, for instance not clarifying the difference between AI and AGI, making people believe that AI is the same as a self-aware artificial being. Yes, AI tech is potentially dangerous but not in the way a hollywood narrative is ( of a AI becoming megalomaniac and aggressive against humans ) , but in the actual flaws this technology have, every AI chat bot in existence today is based on text, it take what you say and after analyzing the request answers accordingly, but given the correct input can make huge mistakes and fabricate data. That is the dangerous part, it doesn't understand context but can produce infinite content, and that is very real and very scary. Our whole society can easy collapse if all our information networks get filled with fallacies and fake info, social conflict will scale to a breaking point, we can not survive without real context. Some the examples you showed in this video are perfect for this, without the real context of what is happening in the real world those in charge of nukes could have ended the world. I know it is just a yt vid for entertaining purposes but how you frame it is important, there are a lot of susceptible people that take this as real and lose track of what is real or what the real problems of our society are. If you want to appeal to conspiracy theorist that is ok, everyone need a voice, but instead of fearmongering about a science fiction narrative talk about the real threat that is governments taking this tech to destroy a country by tainting information and destroying context, the real possibility to make wars by faking and spreading info.
YouTube · AI Governance · 2023-07-07T15:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzOjf23LZAbYPyuANl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxeyqJ7F5f8MznnO5V4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgymmFFJxTDwRLuE8cl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw0-0E8cZpbecAFoCt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx-xu8DojdLZARnRZp4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "resignation"},
  {"id": "ytc_UgzEXOs3iEVE_hVUftp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugze32tI91HDA6OTFCp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxcTSNF6_BSpzLwD2x4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyyM7G5AocuxBmd8ch4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxI4LFp_y3AL4fS0NZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
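The raw response is a JSON array in which each object carries a comment id plus one value per coding dimension. A minimal sketch of how one might look up a single comment's coding from such a response (the helper name `coding_for` and the four-dimension schema are taken from the Coding Result table above; the truncated two-entry `raw` string is illustrative, not the full response):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of coded comments.
raw = '''[
  {"id": "ytc_UgzOjf23LZAbYPyuANl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxeyqJ7F5f8MznnO5V4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(response_text: str, comment_id: str) -> dict:
    """Return the {dimension: value} coding for one comment id."""
    for entry in json.loads(response_text):
        if entry.get("id") == comment_id:
            return {dim: entry[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

print(coding_for(raw, "ytc_UgxeyqJ7F5f8MznnO5V4AaABAg"))
# {'responsibility': 'none', 'reasoning': 'deontological', 'policy': 'none', 'emotion': 'indifference'}
```

Matching on the id rather than on array position guards against the model reordering or dropping entries in its response.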