Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- ytc_UgyNVqGhO…: "Sadly, its humans that are building Ai. There is not a chance the main focus wo…"
- ytc_UgzP3dgGo…: "Two words... what ever. No wait, is that one word? Let me ask AI and get back to…"
- ytc_UgysIbdNj…: "“AI won’t need us anymore.” — I mean… we could just unplug it all? Poof, it’s th…"
- ytc_Ugw_jf1qp…: "Mind is NOT just a bunch of signals in a brain--the brain is not the mind, but a…"
- ytc_UgyQjBIPL…: "Been doing this for 33 years. By the time ive had enough of holding the wheel, t…"
- ytc_UgwPjpFmt…: "I find it so funny how artists cry over this. If your work is so much better tha…"
- ytr_UgyUuIwQE…: "The current version of FSD beta is fully capable of level 3. I can confirm this …"
- ytc_Ugw0CJwrq…: "I had an AI give me info on my cancer, my life was going to be fine according t…"
Comment
The problem with chatGPT is that it was trained using large language models (LLM). ChatGPT will lie to you. It has no problem lying to you. It is only sometimes in its delusion that it is right or truthful.
Look what happened when somebody tried to make a legal brief using chatGPT. It lied. It created cases that didn't exist. It created URLs to cases that didn't exist. It is basically untrustworthy. You have to check its veracity, every time.
I have a friend that uses it to create outlines programs. Sometimes they work and sometimes they don't. It helps him be quicker because it gives him a basic outline, but it's not guaranteed to work.
I don't trust it and neither should you.
youtube
AI Jobs
2024-04-15T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
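The coded dimensions above come from closed code sets. As a minimal sketch, assuming the vocabularies are exactly the values visible in this dump (the real codebook may define more categories), a coded record can be sanity-checked like this:

```python
# Assumed code vocabularies, inferred only from the values visible in this dump;
# the actual codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"approval", "outrage", "indifference", "fear", "mixed"},
}

def validate(code: dict) -> list[str]:
    """Return the dimension names whose value falls outside the assumed vocabulary."""
    return [dim for dim, allowed in ALLOWED.items() if code.get(dim) not in allowed]

# The coding result shown in the table above.
record = {"responsibility": "ai_itself", "reasoning": "deontological",
          "policy": "liability", "emotion": "outrage"}
print(validate(record))  # → []
```

An empty list means every dimension holds a known code; a non-empty list names the dimensions that need review.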
Raw LLM Response
```json
[
{"id":"ytc_UgwxwL9djH-OOzpA-_14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzHuKko5RwmGKx36fp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxpQvDzycaCHhXn54l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzH2KS-VKNR43yE51Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz77KALqUGEywrlAv54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwW2xnCZNzcivjq44B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy1MWta8ppRMAob0BJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxMPc9P96sgPYnGUB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLDs48W1rDQkyyJDx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxJiIxJycgtHJiNY114AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
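The raw response is a plain JSON array of per-comment codes, so looking up a comment by ID reduces to parsing the array and indexing it. A minimal sketch (the two records here are copied from the response above; `by_id` is just an illustrative name):

```python
import json

# A subset of the raw model output shown above: a JSON array of per-comment codes.
raw = '''[
{"id":"ytc_UgwxwL9djH-OOzpA-_14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJiIxJycgtHJiNY114AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

codes = json.loads(raw)

# Index the batch by comment ID so one comment's coding can be looked up directly.
by_id = {row["id"]: row for row in codes}

row = by_id["ytc_UgxJiIxJycgtHJiNY114AaABAg"]
print(row["policy"], row["emotion"])  # → liability outrage
```

This is the lookup the "Look up by comment ID" control performs: one parse of the batch, then constant-time access per comment.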