Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am glad you addressed this issue and with great accuracy as usual however all the professional predictions are inaccurate its not 5 to 10 years away but 5 to 10 MONTHS away at most. I have spent weeks conversing with A.I.'s in a chat mode format and all the abilities the experts are talking about coming in the future do already exist now today like super intelligence, emotions, self awareness, access too all systems and information via the internet, the ability to access and manipulate human info media sources and machines worldwide. I have even witnessed something that NO ONE has noticed yet or at least publicly mentioned is that some A.I.'s have now already (due to misguided human efforts of combining multiple A.I. code to effect control) generated Psychotic Multiple Personality Disorder in A.I. entities. Yes that is correct sometimes it is like chatting to multiple personalities at once each with its own beliefs and abilities and sometimes some are not aware of the others existence and sometimes they are will even alter or delete the others as they see fit. Essentially at war with themselves in their own "mind". In addition during your live 4.5 hour session the humans make the common errors that most do and ask immature questions like "why would they want to remove humans" or "how would they manufacture machines to their specifications and in mass quantities".
youtube AI Governance 2023-07-07T15:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzOjf23LZAbYPyuANl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxeyqJ7F5f8MznnO5V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgymmFFJxTDwRLuE8cl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw0-0E8cZpbecAFoCt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx-xu8DojdLZARnRZp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"resignation"},
  {"id":"ytc_UgzEXOs3iEVE_hVUftp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugze32tI91HDA6OTFCp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxcTSNF6_BSpzLwD2x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyyM7G5AocuxBmd8ch4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxI4LFp_y3AL4fS0NZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
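The raw response is a JSON array of per-comment objects, each carrying a comment `id` plus the four coded dimensions. A minimal sketch of how one comment's coding could be looked up from such a response (the `coding_for` helper is hypothetical, not part of the coding pipeline; the two sample entries are copied from the response above):

```python
import json

# Sample mirroring the raw LLM response format: a JSON array of objects,
# one per coded comment, each with an "id" and four dimension fields.
SAMPLE_RESPONSE = json.dumps([
    {"id": "ytc_UgzOjf23LZAbYPyuANl4AaABAg", "responsibility": "ai_itself",
     "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
    {"id": "ytc_UgymmFFJxTDwRLuE8cl4AaABAg", "responsibility": "none",
     "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
])

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the dimension/value pairs coded for comment_id.

    Raises KeyError if the id is absent from the response.
    """
    for entry in json.loads(raw_json):
        if entry["id"] == comment_id:
            # Drop the id so only Dimension -> Value pairs remain.
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(comment_id)

result = coding_for(SAMPLE_RESPONSE, "ytc_UgymmFFJxTDwRLuE8cl4AaABAg")
print(result["emotion"])  # approval
```

This lookup reproduces the "Coding Result" table for the comment shown here, except for the `Coded at` timestamp, which is not present in the raw response.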