Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I just tried that because I was curious if it was true and my chatgpt does NOT a…" (`ytc_Ugw-Y2anO…`)
- "Christ on a cracker. We're all doomed. Trump's already started the 2nd Civil War…" (`ytc_UgzJcqz8q…`)
- "i feel like most people would agree with this, just don't understand and only ne…" (`ytc_UgzzKVoZK…`)
- "There'll be no work, no money, so who will buy all these goods & services produc…" (`ytc_Ugwb9jB7D…`)
- "Haha, I appreciate your humor! It’s always interesting to see how AI like Sophia…" (`ytr_Ugw9btZmy…`)
- "the book of Daniel chapter 12 says knowledge shall increase at the end of time..…" (`ytc_UgxTb4czO…`)
- "there was also a black kid who unalived himself because his ai girlfriend. i s…" (`ytc_UgwaIneIr…`)
- "Tesla is simply bean counting, just like Ford did with it's Pinto. Ford calculat…" (`ytc_UgyDpDcfS…`)
Comment
also you were wrong about the fact that these algorithms are “too complicated to look inside.” you can actually train neural networks to accurately classify whether a language model thinks it’s lying already, regardless of its scale, from tiny ones up to the largest available. and during in-context learning, that’s actually not a mutable property of the model; it can’t fool the lie detector without changing its output (which is sort of the point, yeah)? of course if “trained,” it can jointly maximize the objective and minimize the lie detector given a gradient of it, but why would a fully trained model be doing gradient updates, and better yet, why would the lie detector be available to it as a white box? it just makes no sense.
youtube
AI Moral Status
2023-08-21T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugx2K5GUzFiqIHv_dtR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxhpmSHhmcI6gBCuet4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugydqdk6mprk_7R8gKJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugy81UBhAJPyqWW3jWt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugww_dwgn8ChrL3JR854AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugwg-Kra3DkgQhMIOBd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugzw25el0Mf2QLx4Tm94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugz-e7YyVgm-HO4etDJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzNlfJS7GMkEtTwrrJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzgtvNmbaz-4H738_h4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}]
```
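A raw response like the one above can be parsed and indexed by comment ID so that any coded comment can be looked up directly. The sketch below is a minimal, hypothetical example: the dimension names match the coding-result table, but the sets of allowed values are inferred only from this sample and may be incomplete.

```python
import json

# Allowed values per dimension, inferred from the sampled output above
# (assumption: the real codebook may define more values than appear here).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"approval", "mixed", "fear", "outrage"},
}

def index_codings(raw: str) -> dict:
    """Parse the raw JSON array and return {comment_id: coding_dict}."""
    by_id = {}
    for rec in json.loads(raw):
        coding = {k: v for k, v in rec.items() if k != "id"}
        # Flag values outside the inferred codebook rather than silently keeping them.
        for dim, val in coding.items():
            if dim in ALLOWED and val not in ALLOWED[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={val!r}")
        by_id[rec["id"]] = coding
    return by_id

# Hypothetical one-record batch, for illustration only.
raw = '[{"id":"ytc_abc","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}]'
print(index_codings(raw)["ytc_abc"]["responsibility"])  # → developer
```

Indexing by ID mirrors the "Look up by comment ID" entry point above: once the batch is parsed into a dict, each coded comment is retrievable in constant time.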