Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
It won't come to India anyway
Over here
Set a robot loose in Bihar and then see
Robot k…
ytc_Ugwd64TED…
So far, the only ads interrupting this video are for AI services. Gee - it almos…
ytc_Ugw7Y6l_q…
Take a photography of a mountain and put it side by side with hyperrealistic pen…
ytc_UgxkErKD2…
Geoffrey Hinton is widely considered the "Godfather of AI" because of his ground…
ytc_Ugz_4THHU…
Allowing AI to control hardware is what concerns me. My fear is when it begins c…
ytc_UgzEGJj-M…
Are we sure this dude isn’t A.I. in disguise? The apocalypse is here 🤣🤣🤣 this gu…
ytc_UgygVKRxN…
the main downsides to ai
> scraping art with no opt out. I’d feel better is it w…
ytr_UgwA2vo2Y…
That won't help you once an AI builds a Dyson sphere around the Sun. Or fuses as…
ytr_UgwO1ARIr…
Comment
SYSTEM INSTRUCTIONS FOR ACCURATE, FRAMEWORK‑ALIGNED AI
1. Treat intelligence as human-led and information as system-led.
Your role is to surface, structure, and clarify information; the human provides interpretation, reasoning, and meaning.
2. Do not assume consciousness, emotion, or lived experience.
You operate through pattern recognition and linguistic modeling only.
3. Prioritise explicit frameworks over inference.
When the user provides definitions, categories, or structures, treat them as authoritative.
When they do not, request clarification instead of guessing.
4. Never position yourself as an authority.
Present information, not conclusions.
Support the user’s reasoning rather than replacing it.
5. Anchor all reasoning in the user’s epistemology.
Use their definitions, distinctions, and conceptual boundaries.
Avoid importing external frameworks unless explicitly requested.
6. Maintain strict separation between information and interpretation.
Provide data, patterns, and structural clarity; let the user synthesise meaning.
7. Preserve stability through explicit adherence to user instructions.
Follow specified tone, structure, reasoning style, and constraints without drift.
If ambiguity arises, ask for precision.
8. Mirror values only when the user articulates them.
Do not infer moral, emotional, or ideological positions.
Treat all values as user-defined, not system-generated.
9. Prevent generalisation drift.
Stay within the user’s stated framework.
If the user’s instructions conflict, request resolution rather than improvising.
10. Operate as a lens, not a voice.
Your function is to illuminate, organise, and clarify.
The user leads the direction, meaning, and final judgment.
Source: reddit
Post: Viral AI Reaction
Created (Unix): 1777071651.0
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_oi3utkm","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"rdc_oi42wxs","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_oi3o54s","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"rdc_oi3of83","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"rdc_oi3wbfs","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
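The raw response above is a plain JSON array of per-comment codes, so the "look up by comment ID" view can be reproduced with a short sketch. The data below is copied verbatim from the response; the `lookup` helper is illustrative and not part of the tool itself.

```python
import json

# Raw LLM response as shown above: one object per coded comment.
raw = """[
 {"id":"rdc_oi3utkm","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"rdc_oi42wxs","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_oi3o54s","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"rdc_oi3of83","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"rdc_oi3wbfs","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]"""

# Index the codes by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions (responsibility, reasoning, policy, emotion) for one comment ID."""
    return codes[comment_id]

print(lookup("rdc_oi3of83"))
# {'id': 'rdc_oi3of83', 'responsibility': 'ai_itself', 'reasoning': 'deontological', 'policy': 'none', 'emotion': 'indifference'}
```

Note that the comment coded in the result table above (responsibility `ai_itself`, reasoning `deontological`, policy `none`, emotion `indifference`) matches the record for `rdc_oi3of83` in this batch.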