Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@athmaid Exactly. It is for people like you and me to generate little pictures t…
ytr_UgwMt4VDZ…
@Nitu_S.C.my point was basically they do it for political favor. A hygienist and…
ytr_UgwHXDj84…
Your true friends and enemies are revealed through whether or not they use AI.
…
ytc_UgxhLLC9t…
Isaac Asimov always had it right decades ago when he had the laws of robotics in…
ytc_Ugzaa9Bi6…
This is actually sick and disgusting Wtf real women can already dress up and do …
ytc_Ugyns3c1V…
Ezra has a point about humans being in continual negotiation with AI. But he's c…
ytc_UgxEeUhfS…
People have the ability to broadly reject technology also…
Ai for creatives is …
ytc_UgzUh9Qj2…
[translated from French] there isn't much personal ethics in this society - it really doesn't …
ytc_UgwCVc9O5…
Comment
While AI programming is improving, it is, in all its current forms, a parrot, talking back at us. You might as well be asking Polly if he wants a cracker. The answer is yes.
Far too many examples to count, that are used, not here exclusively mind you, are playing upon the assumed fears and predispositions created from fiction and history. These "bots" are literally designed the same way programs are designed, they're doing what they're told. What they're told, is to take information WE make, and parrot it back at us.
If enough people on the internet in a given period of time said "the sky has changed from blue to red," programs like ChatGPT, would follow suit, and even time stamp when this mass user assumption occurred. It isn't skynet. It's people.
What you are seeing is a mirror being put up to the face of humanity, and if you're scared of *that* then welcome to the club. People can be scary, and there are over 8 billion of us.
youtube
AI Governance
2023-07-07T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzCW0zGeaIwDcuOBLt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEi5KoFIQphcm3toZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgykYmK_kIuki2zvQrZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgySLWphhOU8743AQo14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdvAoPf7MuN-4Rkrl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-A5iMC4DGrInsu4B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyLtkPrpIorMZyHqr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzya7N-3fVx75W2mP14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwScvCU8AUy7HNfaPl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugyge8SvjZ0WXpeW3ER4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
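
The raw response above is a JSON array, one object per coded comment, with the four coding dimensions from the table (Responsibility, Reasoning, Policy, Emotion). A minimal sketch of how such a response could be parsed and sanity-checked before storing the codes; the allowed values below are only those observed in this sample, not a full codebook, and the `validate` helper is hypothetical, not part of the tool shown here.

```python
import json

# Shortened copy of the first row of the raw LLM response above.
RAW = '''[
  {"id": "ytc_UgzCW0zGeaIwDcuOBLt4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Value sets inferred from the values that appear in this sample only;
# the real codebook may allow more.
DIMENSIONS = {
    "responsibility": {"company", "distributed", "none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none", "unclear"},
    "emotion": {"fear", "resignation", "indifference", "outrage", "mixed"},
}

def validate(raw: str) -> dict:
    """Parse the JSON array and return {comment_id: codes} for valid rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in DIMENSIONS}
    return coded

codes = validate(RAW)
print(codes["ytc_UgzCW0zGeaIwDcuOBLt4AaABAg"]["policy"])  # prints: regulate
```

Keying the result by comment ID mirrors the "Look up by comment ID" feature above: a coded row can then be joined back to its source comment directly.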