Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I am SO referencing this specific case in my degree's final project for Computer Engineering. I'm writing about ChatGPT and I have a section entirely dedicated to ethics and THIS is a perfect example of the downsides of LLMs. Because they only predict the text that follows, and this causes them to "hallucinate", it is so easy for them to generate misinformation when they don't have a very specific dataset or when they have to create something entirely new. GPT3.5 and GPT4 is obviously really advanced and can generate very convincing text that seems as if it had been made by a human, but the overreliance on these Large Language Models is causing people to do... very stupid things.
Even as someone who isn't a lawyer, the mistakes made by Schwartz and Lodoca are so clearly easily avoidable by FACT-CHECKING. And it's very telling that Schwartz thought ChatGPT was a "search engine" because I'm sure a lot of people think that (and I'm not going to get into the can of worms that is Bing Chat, which must not be helping this confusion that people have with what LLMs are).
LLMs and AI should be approached with a degree of skepticism, because they make PREDICTIONS according to a dataset, they can't spit out objective facts.
Platform: youtube · Video: AI Responsibility · Posted: 2023-06-22T13:0… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
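The four coding dimensions above each take one value from a small category set. As a minimal sketch, the allowed values can be checked programmatically; note the category lists below are inferred from the codes visible on this page, not from an official codebook:

```python
# Hypothetical schema: category sets inferred from values appearing on this
# page, not from an official codebook.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"approval", "fear", "indifference", "outrage", "resignation"},
}

def validate(code: dict) -> list[str]:
    """Return a list of problems with one coded row (empty if it passes)."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = code.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

row = {"responsibility": "ai_itself", "reasoning": "consequentialist",
       "policy": "unclear", "emotion": "fear"}
print(validate(row))  # []
```

A check like this catches malformed model output (a misspelled or invented category) before it enters the coded dataset.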
Raw LLM Response
```json
[
{"id":"ytc_UgyvONssAtPiQd8nQ754AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzKUTDvS_WODcFuMkx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw1GGxyjlVbhEngQ_J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwX48gPD3tgbVlbHpB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz4fr9MCbpwqkVXswl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyq0kAvNFGLZw3rWLd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbGThnct8U6zDYUO54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgygKaXohmripXAyaZh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxtd8u5Wll6kqVg8W94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz_DzXBW3yOtTiuqId4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
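The raw response is a JSON array with one object per comment, so inspecting the code for a given comment is a single parse-and-index step. A minimal sketch, using two of the IDs from the response above (the first is the comment coded on this page):

```python
import json

# Abbreviated copy of the raw model response shown above.
raw = """[
 {"id":"ytc_UgzKUTDvS_WODcFuMkx4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyvONssAtPiQd8nQ754AaABAg","responsibility":"none",
  "reasoning":"unclear","policy":"none","emotion":"approval"}
]"""

# Index the coded rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw)}
code = codes_by_id["ytc_UgzKUTDvS_WODcFuMkx4AaABAg"]
print(code["emotion"])  # fear
```

Indexing by ID also makes it easy to detect comments the model skipped or coded twice: compare the dictionary's key set against the batch of IDs that was sent.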