Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
A large language model, is not any sort of viable form of artificial intelligence. It’s just glorified predictive chat/text. It’s no wonder it hallucinates constantly. The hallucinations have only gotten harder to detect. It’s only mostly reliable as a search engine booster, and data entry organizer. And even then, it’s fails so often, you have to double check nearly anything you needed to extract from the interaction. I think we’re much much further away from any real “general” intelligence out of a computer/server farm than even experts portend.
youtube · AI Responsibility · 2025-10-01T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzgClyWoxYLRYW34GR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTDruNWieRnapIv3F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw9cMW7c6tG-Z5nPe94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwOHi105XGUaHJRDgx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxBjgjT-kTgpbjg-gx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzrrq6x50GWp_wRfMJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyFFsHGGtHfEfRUAAp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzxbgZLdwuu3-ZZ3sh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwyLM6DBWbR41FAbvd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyX6cegPe1nsgU12Fl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
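Because the model returns codes for a whole batch as a JSON array, looking up one comment's codes means parsing the array and indexing it by `id`. Below is a minimal sketch of that lookup in Python; the two entries are copied verbatim from the raw response above, and the variable names (`raw_response`, `codes_by_id`) are illustrative, not part of any tool API.

```python
import json

# Two rows copied verbatim from the raw batch response above.
raw_response = '''[
{"id":"ytc_UgzgClyWoxYLRYW34GR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyTDruNWieRnapIv3F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

# Index the coded rows by comment ID so a single comment's
# codes can be retrieved directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# The first row corresponds to the coding-result table shown above.
row = codes_by_id["ytc_UgzgClyWoxYLRYW34GR4AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself indifference
```

In practice the same indexing step also makes it easy to spot missing or duplicate IDs in a batch before the codes are written back to the dataset.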