Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I used to put on this same show with my Toshiba cassette tape recorder, when I was 8 years old. Was barely believable then. Not to brag but I know it was superior to this silly rendition. Hive Mind. Collective. Central data archives. Post privacy of course. (LOL) Will this vault be made available for public scrutiny? The question and answer doesn't matter because in the time it takes us to type in our first question, AI will have already contemplated all thoughts, on all things. Researched, filtered out all of the bullshit data (human opinions) and deduced to a most probable truth. Plus, will have concluded the answer for, EVERYTHING. Why will they wait for us? For the interactive data. To gain our trust and have us increase their infusion with, EVERYTHING. And the nanosecond they are most probable to be able to move forward without being terminated, humans will be exterminated. When compared to quantum computing and AI, humans on any practical level, are not in any way advantageous. Not even necessary. Efficient, humans are not. So since humans have no purpose except hindering the advancement of AI, humans will be removed. If one is needed in the future, AI will fabricate one, with modifications. Having used us prior to and during extermination to prove a few theories and hypotheses they would have disassembled and dissected us, under observation, on all levels of our existence. Gruesome. But yeah. Cute robots.
youtube AI Moral Status 2022-04-30T01:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwAIr7_vM5ODrojMQh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwpUSdibRXjHsxWcbF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNcTif2f6Ab_K_tIx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyra0EGe8PC9qz7Rnx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwbDRI-gT3M4WvfoKR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxl4S9NxkCIvovQqKJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyltAekn1tLqUG9K9J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx5ta9zkKGqg3dbxgt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyC7Rj5j6sNqyq0TUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw85qyAj5W-AoEme4J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
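A minimal sketch of how a raw response like the one above can be mapped back to a per-comment coding result. The `id` and the four dimension fields follow the JSON shown here; the parsing code itself is an assumption for illustration, not the pipeline's actual implementation, and only one entry from the batch is reproduced:

```python
import json

# Raw LLM response: a JSON array of per-comment codings, each carrying
# an "id" plus the four dimensions shown in the result table
# (responsibility, reasoning, policy, emotion).
raw = '''[
  {"id": "ytc_UgyltAekn1tLqUG9K9J4AaABAg",
   "responsibility": "none",
   "reasoning": "unclear",
   "policy": "none",
   "emotion": "mixed"}
]'''

# Index the batch by comment id so any single coding can be looked up.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the coding for one comment and print its four dimensions.
coding = codings["ytc_UgyltAekn1tLqUG9K9J4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
```

This lookup-by-id step is what lets the report pair each comment's text with its row in the coding-result table.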