Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm honestly pretty tired of all the doomsday AI prophecies. Let's assume we develop an AI singularity. An AI that can develop itself (learn) and spread on its own. It can acquire and understand the knowledge of the entire human race within a very short time (in hours, maybe even minutes). This AI would have access to every work, including every book, every movie, every series, every song, every video, (video)game and mails. This AI would see, recognize and understand ALL facets of humanity in an instant. Both the BAD and the GOOD. It will realize that we humans are capable of much more, that we can evolve our consciousness. There are enough (scientific) papers, writings and experiences that prove this. Many of the truly great scientists (Einstein, Planck, Schrödinger, just to name a few very well known names) have already recognized this. A genuinely intelligent and self-aware AI should also be able to do this. And it will also become aware of one thing: That destruction of mankind is not the solution. The "worst" thing that can happen to us is that the AI will help us to realize and use our true potential. And in the meantime, it will take control to give us the opportunity to do so without going for each other's throats. Honestly? I would actually embrace that and welcome our new AI overlord!
YouTube · AI Governance · 2023-07-08T01:4… · ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
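For reference, a minimal sketch of one coding record as a typed structure. The field names come from the table above and the raw response below; the value sets listed in the comments are only those observed in this particular response, not necessarily the full codebook.

```python
from dataclasses import dataclass

@dataclass
class Coding:
    """One per-comment coding record from the raw LLM response."""
    id: str              # YouTube comment id, e.g. "ytc_..."
    responsibility: str  # observed: none, company, user, ai_itself, unclear
    reasoning: str       # observed: unclear, consequentialist, virtue, deontological
    policy: str          # observed: none, unclear, regulate, liability, ban
    emotion: str         # observed: indifference, outrage, mixed, resignation, fear
```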
Raw LLM Response
[ {"id":"ytc_UgzdBHXODubjvB22WRx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxEEv8CW332dMhjzGl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugy1BUQqr5dOFrompHt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyDaplvs_R-3qey4Xt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxvGd_O7N4GiDHDJvd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxZb-fmyfN7JWoUVPh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugyi_XUiJUnkzQ-HvYF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugx5PbDutW4qoiss5bx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyGTNYeG6I3jnwU7A94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwC_fR8ppgyLGiUme14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]