Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It has two scenarios ... The Blade Runner scenario or the Terminator scenario looks at those two movies for what could be. But the world will be wrecked in both scenarios. The limiter for AI is Earth's resources. AI has to go to the solar system to get more, which it is going to try to do. Civilisation is going to change no matter what. Middle classes will lose all work. AI's limit is access to the real world. If it manages to get humans to do tasks, we are doomed. We need to trace all AI instructional items so that we know the difference. If it is human instruction it's ok; if it's AI it could be bad. AI still needs humans. We need to identify everything that is AI generated... to survive, so we can stop it if we need to. We are at the moment the gatekeeper to the real world, and AI only exists in the virtual world, a safe distance away. AI should never be able to create companies, as these are the core vehicle to create activity in civilisation.
youtube AI Governance 2026-01-26T07:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgziQvlqc2yTA8IAROR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwK65UEaebBtX6HK1N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxYktEcmkAKWoNgkJN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwsYKxS4n7TO1ExGxJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzslie877F3k5anq6x4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyl3EuC1ndYutyvC8B4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugylc20EAl8lJ_u2MPx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwoYLR8i58a34cE8yR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwA8nhz79b_AV9aiHh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxcqLMdPfIAIh0KG2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
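The raw response above is a JSON array of per-comment coding records, one object per comment id with the four coding dimensions. A minimal sketch of turning such a response into a lookup table, assuming the schema shown (the function name and the `DIMENSIONS` tuple are illustrative, not part of the tool):

```python
import json

# The four coding dimensions used in the records above (assumed fixed).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Map comment id -> {dimension: value} from a raw LLM JSON array.

    Missing dimensions fall back to "unclear", mirroring the value the
    coder itself emits when it cannot decide.
    """
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

# Example with the first record from the response above:
RAW = (
    '[{"id":"ytc_UgziQvlqc2yTA8IAROR4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"mixed",'
    '"policy":"unclear","emotion":"indifference"}]'
)
codings = parse_codings(RAW)
print(codings["ytc_UgziQvlqc2yTA8IAROR4AaABAg"]["emotion"])  # indifference
```

Keying by comment id makes it easy to join the parsed codings back onto the original comments for display, as in the "Coding Result" table.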