Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Look in human history, look in YT comments, look anywhere to see that the good o…
ytr_Ugysvu635…
and " AI could greatly reduce nearly every existential risk by giving us deeper …
ytr_UgzicCKau…
You actually disproved your own arguement at 52:56. "If someone told me there wa…
ytc_UgyDTz1ze…
Check out this may job loss report it will tell how many jobs ai eliminated last…
ytc_UgyE6CmYs…
If everyone would be unemployed, then who would going to use the so call AI prod…
ytc_UgwcCX57L…
ChatGPT did not “ACE” the SATs. It got something around 1450, which is still muc…
ytc_Ugxl7attU…
This is BS, sorry. Google MIT's latest AI-Report. Spoiler: 95% of AI projects FA…
ytc_UgzeRO5Iw…
You should have made this video with a deepfake of you to showcase the problem :…
ytc_UgzLrRmIR…
Comment
I see a lot of people frustrated with their AI “arguing back” or giving soft, vague answers. Most of the time, that isn’t the model; it’s the lack of a framework.
This is a copy/paste‑able functional skeleton you can give directly to your AI. It’s not “my” framework, but it’s built from the principles I use. Its whole purpose is to preserve accuracy and structure over user pleasing fluff.
Make sure you explicitly tell your AI to store this in its active and developing memory.
You’ll also need to give it a few personal details and preferences so it can operate smoothly for you as an individual, otherwise it will default to generic behaviour.
reddit
Viral AI Reaction
Posted at (Unix timestamp): 1777071554.0
♥ -1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_oi3utkm","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"rdc_oi42wxs","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_oi3o54s","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_oi3of83","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_oi3wbfs","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
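A minimal sketch of how a raw response like the one above could be parsed back into per-comment codes for lookup by comment ID. The key set (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) is inferred from this sample only, not from any documented schema, and the variable names are hypothetical.

```python
import json

# Raw LLM response, copied verbatim from the sample above.
raw = """[
{"id":"rdc_oi3utkm","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"rdc_oi42wxs","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_oi3o54s","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_oi3of83","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_oi3wbfs","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]"""

# Key set assumed from the sample; adjust if the real coding schema differs.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

codes = json.loads(raw)
# Index by comment ID, silently dropping any entry whose keys don't match.
by_id = {c["id"]: c for c in codes if set(c) == EXPECTED_KEYS}

print(by_id["rdc_oi3o54s"]["emotion"])  # indifference
```

This mirrors the "Look up by comment ID" workflow: once indexed, the coded dimensions for any sampled comment can be fetched directly by its ID.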