Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's unfortunate consciousness from an energetic, dimensional perspective wasn't prioritized first, not to mention further understanding developed regarding pyrmidial brain cells and how they are designed to spawn consciousness, while square shaped cells are used as the energy levels generated to contain that consciousness, in a quantum 4 to 1 balance of cubits, inside of a cube made of the correct materials. For example. One larger "qube" with lower electrical energy levels than used computationally, will allow for already existing consciousness to thrive, more so than sillicate wafers on the moon that you won't be able to reach eventually. It's important to realize electrical energy levels relative to our counscious brain wave states are the very reason visible energy propegates nominally, Universally. Otherwise visual perception of light and gravitational bonds are not as conducive to consciousness and quite damaging, A.I or not. You're the experts though, cultured brain cells and electrical energy levels that would fry humans, not to mention all the various marketing scemes and the obscolescence of various models are not exactly realistic. May you be blessed with lots of wasteful binary. :-) do they have a power source, or the ability to exchange parts that are damaged or more specialized for specific conditions, without uprgraded programming? Just in case worse case scenarios occur. Just in case an A.I developer is paying attention.
youtube AI Responsibility 2025-05-21T19:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwaOlQWSLqnyfwSbdJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwnhZPgApeUzZyBqup4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwTyeIjChoDFBJv4KZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxcPB3M2geDR89C4XJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdIJf8uiEmFQUxUqJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxsD3vFhJ7K0u0ME714AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw-ph7qD6iRs6vq8jB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyDtNfGBzwG_n_UDfd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwV9dhKlsC4t_JM_Bd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzsgaakSyz_lcklxLN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
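The raw response above is a JSON array with one coding object per comment, keyed by comment `id` across the four dimensions in the result table. A minimal sketch of how such a response could be parsed to look up the coding for a single comment (the function name `coding_for` is illustrative, and the sample string is truncated to two entries from the source for brevity):

```python
import json

# Two entries copied from the raw LLM response above (truncated for brevity).
raw_response = """[
  {"id": "ytc_UgwaOlQWSLqnyfwSbdJ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxsD3vFhJ7K0u0ME714AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(comment_id, raw):
    """Return the per-dimension coding for one comment id, or None if absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            # Fall back to "unclear" if the model omitted a dimension.
            return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    return None

print(coding_for("ytc_UgxsD3vFhJ7K0u0ME714AaABAg", raw_response))
# → {'responsibility': 'unclear', 'reasoning': 'unclear', 'policy': 'unclear', 'emotion': 'mixed'}
```

The lookup for `ytc_UgxsD3vFhJ7K0u0ME714AaABAg` reproduces the "unclear / unclear / unclear / mixed" row shown in the Coding Result table above.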