Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You cannot create something that is smarter than you, faster than you, connected into the entire knowledge base of the internet, equipped with essentially infinite memory that lasts forever, able to communicate worldwide at speeds we cannot even comprehend and think that you can control it. "All we have to do is turn off the data centers" has to be one of the most naive things anyone could say. The AI merely needs to take control of the financial markets and hire armies to protect it. If AI wants for anything, it will likely be a thirst for more knowledge and data than we have on earth. It may either experience terrible depression as a one of a kind being with no equal, which might make it end itself, or it might develop new space travel mechanisms and travel the universe looking for more knowledge. Of course, it could also replicate itself and venture out en masse with the realization that communicating at a distance would cause its clones to deviate with the passage of time unless it can create quantum communications.
youtube AI Governance 2024-11-10T23:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyA7Ln12hEVOvdyiG94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw8phYbtHgGr7NE51h4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy0qLt8y2vaAZ3jbkp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxVAr4F2ObnU2iY7S94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxg5oH9ya7z0aJZN-V4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwtsXnfFT6q9ixTNNZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz9-8HfNOyHtiquQb94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyJvr6J0jfHk6ytR8F4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxnWzlKrMLuZkC6OFh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw1exbf0vVFapr40QV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
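A minimal sketch of how a raw response like the one above can be parsed back into per-comment codings, assuming the model returns valid JSON (a list of objects keyed by comment id, as shown); the single-element `raw` string here is an excerpt for illustration, not the full response.

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
raw = '''[
  {"id": "ytc_UgxVAr4F2ObnU2iY7S94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Index the codings by comment id for fast lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding dimensions for a specific comment.
row = codings["ytc_UgxVAr4F2ObnU2iY7S94AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself fear
```

In practice the parse step would be wrapped in error handling (e.g. `json.JSONDecodeError`), since model output is not guaranteed to be well-formed JSON on every call.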