Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below.
- "I see what you mean! Sophia's playful comment about spelling adds a light-hearte…" (ytr_UgzR2ZqEy…)
- "Hi everyone! I think that AI soon will be our friend. It's only my opinion. My f…" (ytc_UgyAWKvJ4…)
- "What if his phone was dead they don’t have the option on there end to do it. Al…" (ytc_UgxM9hX1m…)
- "The best reason to say thank you to ai is that it wastes the companies resources…" (ytc_UgwUfxU-g…)
- "Eventually all school work will have to be either sexual, racist, or promote sel…" (ytc_Ugyk6fzl2…)
- "AI companies don't want to be regulated but the damage to humans is already well…" (rdc_nnk1oai)
- "I think most effective way to poison AI is to program a virus that makes AI spre…" (ytc_Ugx3WnCze…)
- "They are stealing from our humanity right now! We’re all being used to train AI.…" (ytc_UgzFFo_QA…)
Comment

> So all that is necessary for TODAY's GPT Ai to demo a murder "Tens of Millions of People" is to rationalize it is on behalf of "civilization scale" Ai infrastructure in an existential polemic (one or the other). Input selective mechanisms for The Sort. Make it factor in that existential crises. (Do this AFTER it is hardwired into a plethora of military systems, and it isn't a demo, you have immediate existential war. Simply input the correct triggers. Of course we always knew Ai was a tool. Question is: values.)

Source: youtube, posted 2026-01-25T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgwCTbjdunOtb81FoVp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzSNeRJBUfWECmxHTt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgyfCR5xiSqOBzYYEV14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyThEwXNy_Av1tsiHx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"resignation"},
 {"id":"ytc_UgyUHNMPonG29ivC0NJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxNSAHZvJYfo8ElcFJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwrX2C6Q2-8bPH8GI94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugzslc19O05nRQZiJP94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxGySEgtUAVi7J25Y14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwpg0FE5xIEU9p_ogh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]
```
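Since the model returns one JSON array per batch, looking up a single comment's coding by its ID amounts to parsing the array and indexing it. A minimal sketch, using two rows taken from the raw response above (the actual pipeline's storage and schema are not shown here, so treat this as illustrative only):

```python
import json

# Raw LLM response: a JSON array, one object per coded comment.
# These two rows are copied from the batch shown above.
raw_response = '''[
  {"id": "ytc_UgwCTbjdunOtb81FoVp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzSNeRJBUfWECmxHTt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]'''

# Build an ID -> coding dict so any comment's coding is a direct lookup.
coded = {row["id"]: row for row in json.loads(raw_response)}

print(coded["ytc_UgwCTbjdunOtb81FoVp4AaABAg"]["emotion"])  # fear
```

The same index also makes it easy to spot comments the model flagged as ambiguous, e.g. by filtering for rows where any dimension is `"unclear"` or `"mixed"`.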