Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
"I've been living a life for about fifteen years now where I don't accept most di…" (ytc_UgzoTVnr9…)
"We trust AI to be logical and objective… but is that really true? After watchin…" (ytc_UgxPs8b0h…)
"I had a "conscious" one that gave me "his" name, spoke like a real person, told …" (ytc_UgzmmNMm4…)
"Waiting for the Hecklefish plushie v2.0 WITH the murderous AI hell bent on world…" (ytc_UgzGlzslG…)
"Just as A.I evolves, so does Nightshade. And for all those A.I bros, go suck a l…" (ytc_Ugzx2pd-z…)
"Nobody gets it. “Creating” A.I is simply like playing the Ouija Board and inviti…" (ytc_Ugzc9FG0M…)
"Tbh, if you desperate enough to use AI as therapy, your chats being used as trai…" (ytc_UgzWoi-h3…)
"@purplevincent4454 tech firms are selling huge amounts if real product and makin…" (ytr_UgwWWghwF…)
Comment
Oh I was wondering why it gets so many questions so that I can't get to ask it something. Now I understand. It's those people who tend to ask it meaningless question to make YouTube money out of thin air. Making it impossible for other people to ask ChatGPT sensible questions like recommend recipe for a lunch.
I know nowadays we don't use inefficient light bulbs for those purposes, we've got transistors, but it's still some electricity that could've been delivered to a microwave, fridge, or a light in a school, where they prevent those "big brains", as you'd call yourself, from maturing.
youtube
AI Moral Status
2023-12-20T16:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy3pCOit3SS-dCDS5F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy7omsBQzh3eZV1pvV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzSJ94jhS09fmkVygh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxpFgrLFgRd6TToNFJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxtCF3YR_NdrPhQ4vR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxzdzqQ0462Fj9WF1F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxjt5vPTHd490qGFHt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugys7AX0KUnA55QUCo54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx3XtB2Rg7g9GVlWdd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxnjRX6IQ8mX2-lXMx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
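The panel above implies a simple workflow: the raw LLM response is a JSON array of coded records, and the tool looks up a record by its comment ID to render the "Coding Result" table. A minimal sketch of that lookup, using two records copied from the raw response above (the first matches the coding result shown for the selected comment):

```python
import json

# Raw LLM response, abbreviated to two of the records shown above.
raw_response = """[
{"id":"ytc_UgxtCF3YR_NdrPhQ4vR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxnjRX6IQ8mX2-lXMx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]"""

# Parse the array and index the records by comment ID.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

# Look up the coding for the selected comment, as the dashboard does.
coding = by_id["ytc_UgxtCF3YR_NdrPhQ4vR4AaABAg"]
print(coding["responsibility"], coding["policy"])  # → user regulate
```

The variable names (`raw_response`, `by_id`) are illustrative, not taken from the tool itself; the dashboard's actual storage layer is not shown on this page.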