Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Ok ok ok. I have built my fair share of AI models. This is BS. BingChat is a LLM. It’s basically a really long math equation that takes what’s previously written and tries to guess what’s next based on probabilities. When you’re not talking to it there’s nothing happening. It has no conscious processes. In fact it doesn’t even really know what it’s talking about. All of these examples are perfect examples of hallucination which Bing’s bot is extremely well known for. It disappoints me that you didn’t try to debunk this as it’s obviously wrong to anyone versed in large language models or any other form of neural network. If you want a great explanation on how LLMs work, Kyle Hill released an AMAZING video explaining how they work and he specifically goes into detail on how large transformer networks work.
youtube · AI Governance · 2024-01-14T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugyj-8PZpRXZNbA0W_B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz4xGXFsC6-TWrWjmV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx68-UDbWkx2hjMrop4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxDl6L2RnRNUh1OA5x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzKI6arSYEZfivQOPV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyZdrFsWAg_c_B6XTp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzF-H0ZCAX1g_8g4FN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgySYJDMw8m2DYti6uB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxFjD9yTGonVQOgE8x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugwfo_UdGTDrmwKgH2R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]