Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytr_UgwPkfsH9…`: "Great question and honestly one that doesn't get asked enough. You're right it's…"
- `ytc_UgzQOhG7m…`: "Instead of bypassing AI detection, it's better to ensure originality. If you nee…"
- `rdc_dmokgbj`: "I wonder if they would try to pivot to being more like extended warranty service…"
- `ytc_UgwdT8K1o…`: "All the departments in evey state and government needs to be revamped. AI is a s…"
- `ytc_UgzNlQ-Mc…`: "Dude, You are really trying hard to get new news against Open Ai. I from here ca…"
- `ytc_UgyXVCW-X…`: "Look up Luddites. You all sound the same as they did. AI is the new industrial r…"
- `ytc_Ugz1pgsSj…`: "Big argument ai bros use is “well what if someone is disabled and can’t use thei…"
- `ytc_UgzfkR1JA…`: "Hot take that ain’t even that hot: AI was never meant to generate images or vide…"
Comment
I installed a local llama AI and I programmed it to say it doesn't know, if it fail to find a reasonable answer, I also programmed it to not use Sycophancy. It work quite well but it still "hallucinate" since the data it train on is obviously not quaranteed to be accurate since it still come from humans who can't agree on everything so it gets trained on both sides of the story. Also it make internet searches and I see the hallucinations are greater when the internet is slow...AI is not intelligent, it's clever code that is programmed for pattern recognition. So it take what you say look for similarities, try to predicts the pattern and spits it out. It does not know what it say, you only notice it once you get a real problem which have no answer on the internet.
youtube · AI Moral Status · 2026-01-15T18:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxoBKYHDOlWXw2ukhl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8PsmMoMXCDsLLgpN4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxnM7NPE-qPsMeK-fZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzQehEonz8RsHB83Fp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyeki81sRIbPn6AJZh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxYNq3S3jmPlxF6O0V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz4-BKGrUI2xFRkQj14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzWvNBxMQ_SS3fX_-94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzlT3GJxYFK42tj9fF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwlmMjeDWzJMao02OZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
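The raw response is a JSON array of per-comment codes, one object per comment with the four dimensions from the coding table. A minimal sketch of consuming such a batch, assuming the response parses as valid JSON (the IDs below are shortened placeholders, not real comment IDs):

```python
import json
from collections import Counter

# Example batch in the same shape as the raw LLM response above
# (hypothetical short IDs; dimension values from the coding scheme shown).
raw = """[
 {"id": "ytc_a", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_b", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]"""

codes = json.loads(raw)

# Index codes by comment ID, mirroring the "Look up by comment ID" view.
by_id = {row["id"]: row for row in codes}
print(by_id["ytc_b"]["policy"])  # regulate

# Tally one dimension across the batch.
emotions = Counter(row["emotion"] for row in codes)
print(dict(emotions))
```

In practice a lookup like this is what lets the dashboard jump from a comment ID to its coded dimensions without rescanning the whole response.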