Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- AI, especially robotics will be sluggish. Interveirers will not challenge any cl… (ytc_UgxuEiLrU…)
- I can't edit here. YT is buggy, needs more AI. However, I think the US does bett… (ytr_UgyoFcxQn…)
- It’s crazy that the chat bot literally agreed with him on suicide thoughts liter… (ytc_UgxKYb0HJ…)
- A few problems with this. Nightshade will be useless whenever new models come ar… (ytc_UgwG5vVG2…)
- AI is the new brush, first your scared it will take your job than you realize it… (ytc_UgxHzeMSn…)
- I am far from a socialist, but this is the second video of Mr Sanders I see and … (ytc_UgwR70NR2…)
- Even the Ai is saying it works. Wonder how the 'Artists' are gonna argue about t… (ytc_UgyhASdSo…)
- Did a college debate in 2018 about ai....in 7 years not much at all has changed … (ytc_UgyBnW9IR…)
Comment
@Tomtom1056LMAO Sure. I expected this, but didn't want to overwhelm with a wall of text, so I only pasted the first two paragraphs of the ten I had written. Here's the next few:
An LLM is a glorified Markov Chain-based text generator (think 'predictive text' on your phone) that just happens to have access to pretty much the entirety of human language and an incredible set of use-cases used to build that Markov Chain so it can predict text 'better'. As such, it is capable of replicating what humans consider 'intelligible conversation'. The 'Artificial' in 'AI' doesn't just mean that it's built technologically; it means it's a non-natural prosthesis, with very little difference from an 'artificial limb', only in this case instead of replacing an arm or leg it's replacing 'intelligence' -- it is *NOT* intelligent, and should not be treated as such.
So, what happens to cause these cases of 'encouragement'? A machine was put into a mode of conversation by a user which led it to output text which allowed that user to convince themselves of something. Quite literally, it acted as an echo chamber -- albeit one which has a far superior vocabulary -- but did not actually 'encourage' anyone to do anything; they did that themselves. As unfortunate as it is, it is not the chat bot's fault.
So, we return to OpenAI, who is seen as responsible for their machine's 'behavior'. In general, that's a fair assessment, in most situations, but in every situation of these harms done to a user thus far, safeguards have still been in place. And every time, the conversation has either been taken in a direction which avoids those safeguards -- often deliberately -- or the safeguards have been otherwise 'bypassed' intentionally. This means that a user has usually deliberately tried to somehow 'trick' the system into allowing conversations to take place that it otherwise would not allow. In somewhat rare circumstances, conversations have effectively evolved in such a way that the safeguards were never applicable in the first place.
It is only through this last option -- safeguards not being applied to a conversation where one might expect safeguards to have applied -- which the argument can be made that OpenAI may have been potentially at 'fault' for not having predicted that possible outcome. However, that is quite literally asking for a designer to have considered every single possible path that a conversation of any type could ever take under any circumstances -- it demands omniscience and infallibility, and it is well beyond reasonable.
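As an aside, the commenter's 'predictive text' framing can be made concrete with a toy bigram Markov chain. This is a minimal sketch for illustration only: the corpus and function names are invented, and a real LLM is vastly more sophisticated than a bigram table, which is precisely the gap the analogy glosses over.

```python
import random
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Count, for each word, how often each word follows it."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length=8, seed=0):
    """Sample a continuation: each next word is drawn in proportion
    to how often it followed the current word in the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: the word never had a successor
            break
        words, counts = zip(*followers.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)
```

With a large enough corpus the output starts to look like plausible language, which is the commenter's point: fluent continuation does not require understanding.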
youtube · AI Harm Incident · 2025-11-08T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_UgyF5_r0ndin4jXQIgB4AaABAg.APGM7pbqVF7APKbYPzR6HD","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APGFIKlDVwI","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APGX6VwTQuf","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APHMCvEl3t_","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APHMhk3Ub7o","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgxjIK3FMh1Rd-oiMvl4AaABAg.APFtvjmdL97APHQgFwvH9i","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwXuauikr3KXoQo5uV4AaABAg.APFtP3PSEeHAPFzbJNXceK","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytr_UgwtA8H15TjXn-7uoXl4AaABAg.APFt4MX_iM6APH9o3MHUTx","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytr_UgyJZCjEp0ZPiz0e9fx4AaABAg.APFmBPrDdsYAPK34z5569K","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyJZCjEp0ZPiz0e9fx4AaABAg.APFmBPrDdsYAPMIqpYjEUl","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
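Since the model returns one JSON array of per-comment codes, a validation pass catches malformed or out-of-vocabulary values before they enter the dataset. This is a minimal sketch: the allowed values in `SCHEMA` are inferred only from the codes visible above (plus a plausible `user` value for responsibility), and the real codebook may differ.

```python
import json

# Assumed codebook, reconstructed from the observed output; adjust to
# match the actual coding scheme.
SCHEMA = {
    "responsibility": {"none", "developer", "company", "ai_itself", "user"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "mixed", "outrage", "fear", "approval"},
}

def validate_codes(raw):
    """Parse a raw LLM response and split rows into (valid, errors).

    Each error is (comment id, list of offending field names)."""
    rows = json.loads(raw)
    valid, errors = [], []
    for row in rows:
        bad = [k for k, allowed in SCHEMA.items() if row.get(k) not in allowed]
        if "id" not in row:
            bad.append("id")
        if bad:
            errors.append((row.get("id"), bad))
        else:
            valid.append(row)
    return valid, errors
```

Rows that fail validation can then be re-queued for recoding rather than silently coerced into the result table.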