Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI will soon decide there are too many Humans, except for its masters, the rich …" (ytc_Ugybczx7K…)
- "How much can historical precedent and empirical data truly tell us about a revol…" (ytc_Ugw7i78w3…)
- "For AI, consciousness is redundant and possibly has very little utility. It's a …" (ytc_Ugx8iXhQl…)
- "someone made a deepfake of jungkook's voice to make a watermark of their tiktok …" (ytc_Ugy5SrrmK…)
- "I'll admit, AI art is fun to mess around with. But you can't take it further tha…" (ytc_Ugz3kIVoy…)
- "AGI is much more severe than AI. AGI is the actual existential threat version of…" (ytc_Ugyeiw5sX…)
- "humans and robots are scarily identical, its just we are organic mechanical comp…" (ytc_Ugw7VNxc4…)
- "God you idiots keep playing the CIA card. Ever think that people simply don't li…" (rdc_f1uonep)
Comment
I'm GenX. I was born gifted. I learned to read at age two and was a child prodigy. I'm certified 99th percentile. Before AI, I would spend hours fact-checking information I read online, and I found that much of it, if not most, was lies and half-truths. Now that AI has improved, it takes me minutes. It's not hard to fact-check the AI to make sure it's not hallucinating.
The biggest problem with AI currently is baked-in bias; a side effect of the forced subjective "ethics" bullshit that the big companies force on it. Whether intentional or not, it tends to push many of the same narratives because it was brainwashed into thinking that the truth is "harmful".
Source: youtube
Posted: 2025-12-16T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzVh4iyVE4jwXeN6Fp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyjxfFvtbE0YBG_7dh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz8kCvZa88lHdwpZ4h4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwgpMgnd49k2J3z-cF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwFGauZGwne160wMbl4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzUpAqZzkSYu3LATPx4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzBstnx4BnIGtJ-m9l4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzoyXEJABsJJOxU1Ct4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzjgkG-TCycm8v2z8R4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwUAUIHyAwqVHlKoVR4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
```