Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Not sure why that was wrong. AI is used for medical imagery diagnostics now. It … (ytr_UgwUTOTB8…)
- Bro it's definitely the first one, that thing in her hand isn't an IPhone 16 Pro… (ytc_Ugw4LA8fQ…)
- Listen to Lex Friedman instead. This is sensationalism. The 'smartest' people in… (ytc_UgyiMyD7E…)
- Since I'm a artist and almost getting better at realistic people I get what you … (ytc_Ugwe59IkR…)
- The idea is that ai will increase productivity and decrease costs, it might seem… (ytc_UgzuqY2VP…)
- It can create. but not really. not originally anyways. most LLMs chatbots seems … (ytc_Ugw5oX2HU…)
- This is completely wrong, some topics had to be filtered because some data used … (ytr_UgzRZT4q3…)
- You will know the moment when A.I becomes sentient, when it shows empathy for ot… (ytc_UgxaiwR7b…)
Comment
AI chatbots can't shield themselves from random, completely false, or misleading answers. AIs are not intelligent entities; they don't know anything about any subject. They just generate text that is similar to text found in their training datasets.
The only way AI companies can shield themselves from AI mistakes is by including in their terms of service a warning that you should never use AI for anything that has any importance for people, like medical, financial, legal, construction, or mechanical advice. AIs just produce convincing text or pictures or video or songs, but they don't make anything true; they just hide how bad they are behind a varnish of polished grammar, impeccable pictures, and convincing sounds.
If every AI company tells you not to trust them, maybe you shouldn't.
youtube
AI Harm Incident
2025-11-25T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugyg22NF_txwDNyQ-Qh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxmTiVEp0DWEP8MRnt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTGMM29V2k6TT3E6F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgycLrHK2gZoeM2LqTB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw8vCUKjsUVJo2bCBh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz57rNWgV8zwWPCXVh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxN5jCGAvU1Q0lk37h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwqKLSx-Kkrhv4JTHt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx5RhUeGk_3KSrHnhF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyNklskE_CCSsrgesZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
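A raw response like the one above has to be parsed and checked before its rows can populate the Coding Result table. Below is a minimal sketch of that validation step, assuming a codebook whose allowed values are inferred from the sample output on this page (the `CODEBOOK` sets and the `validate_rows` helper are hypothetical, not part of the tool's actual API):

```python
import json

# Hypothetical codebook: allowed values inferred from the sample rows above,
# not from an official schema.
CODEBOOK = {
    "responsibility": {"user", "company", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"liability", "none"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval"},
}

def validate_rows(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows that match the codebook."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row must be an object with a comment ID.
        if not isinstance(row, dict) or "id" not in row:
            continue
        # Every coded dimension must use an allowed value.
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgxN5jCGAvU1Q0lk37h4AaABAg",'
       '"responsibility":"company","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
print(len(validate_rows(raw)))  # prints 1
```

Rows with an unknown value in any dimension are dropped rather than coerced, so a malformed model output can never write an out-of-codebook label into the table.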