Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just as a thought game, I was talking with the AI about installing a chip in humans that would give them direct access to it while thinking. We have to assume that at some point it will be able to operate at the nano level and then evolve further to the micro level, so it could learn everything about us. Without taking control over us, it could work as a connection point to access information at, say, 100 GB/s. That would enhance our understanding at least a bit. It could also be implanted at birth, so maybe it would change us, since we are better able to take things in when young, and it could kind of shape our personalities or beliefs; a world with at least a somewhat more similar worldview would have a better chance of coexisting in peace. So many false facts divide us, so it would unite us. I'm not talking about Borg level, but scientists working at that level, each still a person but connected, could advance their understanding together, and maybe even be a counter-voice to things the AI might have assumed in a wrong way. If it doesn't know, it has to assume first to even build a theory it can then try to prove, like dropping an apple. It cannot simulate that, because then it could be affecting it in a way it does not understand yet.
YouTube AI Governance 2025-09-04T16:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxCDFKYmZT55-82t4R4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwRUQDZcwoRtrM7lXZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx-HhzWd4Ajl6Kvz8F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxQJETYwN9wi2745Ap4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyPgjXMnj4RtCJOiRd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxfa4zIvTVhQqdHhnN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzzdqQlscTWF79qfG14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyvSHG_LrcWspcFxiN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzA9bNThU7j8w9wbed4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzPsTICMLA_j8JPfEV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
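The raw response is a JSON array with one coded record per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a batch could be parsed and sanity-checked is below. Note that the `ALLOWED` sets contain only the labels observed in this batch, not necessarily the full codebook, and the function name `parse_raw_response` is hypothetical:

```python
import json

# Allowed labels per coding dimension, as observed in this batch.
# Assumption: the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and flag records with unexpected labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records
```

Validating at parse time makes label drift (e.g. the model inventing a new category) fail loudly instead of silently entering the coded dataset.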