Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Although I knew what ChatGPT was, I erroneously assumed it would have access to some of its training data in some manner (such as where the data came from), so I asked it for some information I was looking for, along with sources. What I then did was try to find said sources, to double-check ChatGPT's information.
The information turned out to be correct, oddly enough (I guess the training data must've been sufficient for that), but the sources were non-existent. That's when I learned that ChatGPT can just make up URLs, book titles, and so on.
It never even occurred to me to simply rely on whatever ChatGPT wrote without checking if it was true because that (blindly accepting "facts" that an unthinking, unfeeling algorithm gives you) seemed stupid at best, dangerous at worst. This video/case perfectly demonstrated why.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Responsibility |
| Posted | 2023-06-11T20:2… |
| Likes | ♥ 4 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
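Each coded record assigns one value per dimension from a small category set. As a sanity check, a record can be validated against those sets. The sets below are a minimal sketch inferred only from the values that appear in the raw responses on this page; the actual codebook may define more categories.

```python
# Allowed values per coding dimension, inferred from the raw LLM
# responses shown on this page (assumption: the real codebook may
# include additional categories).
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "industry_self", "ban", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "resignation"},
}

def invalid_fields(record: dict) -> list:
    """Return the coding dimensions whose value falls outside ALLOWED."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding shown in the table above, as a record:
record = {
    "id": "ytc_UgxglSmG-wd9lj3WtPx4AaABAg",
    "responsibility": "user",
    "reasoning": "unclear",
    "policy": "unclear",
    "emotion": "indifference",
}
print(invalid_fields(record))  # -> []
```

A record with an unrecognized value (or a missing dimension) would be flagged, which makes malformed model output easy to catch before it enters the dataset.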
Raw LLM Response
```json
[{"id":"ytc_UgxglSmG-wd9lj3WtPx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx2rcpJGtMXzyYPEvJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwyKn1pAEPwTN_YD114AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyQU4_eHvJPpmFQSax4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugys7amIpYUHbqXMaSR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxvWZY0Vxl4ELascb94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzD8QV4OOphfvDAFf94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxffdE3BfUoPgqW6P94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy_D-R3mG7GYyJZYwp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyBKcmCgg8cl5lHDCp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"ban","emotion":"outrage"}]
```
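Because the model returns one JSON array per batch, looking up the coding for a single comment ID amounts to parsing the response and indexing by `id`. A minimal sketch (truncated here to two records from the response above; the helper name `index_codings` is illustrative, not part of any tool shown on this page):

```python
import json

# One batch of codings as returned by the LLM, truncated to two records
# from the raw response above.
raw_response = '''[
  {"id": "ytc_UgxglSmG-wd9lj3WtPx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx2rcpJGtMXzyYPEvJ4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
print(codings["ytc_Ugx2rcpJGtMXzyYPEvJ4AaABAg"]["emotion"])  # -> outrage
```

Indexing by ID is what makes the "inspect the exact model output for any coded comment" lookup cheap: one parse per batch, then constant-time access per comment.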