# Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by its comment ID, or pick one of the random samples below to inspect it.
- "It’s not conscious, it’s the dynamic between connection between humans and artif…" (`ytc_Ugzzu7_op…`)
- "In the time it takes me to figure out the magical sequence of prompts (ai wrangl…" (`ytc_UgzPFSWgi…`)
- "guys stop using ai cause polar bears will get extinct we save the sea turtles so…" (`ytc_UgwrOhmgG…`)
- "I gotta say, I think I would enjoy an occasional sarcastic barb: \"what! you've n…" (`rdc_jigbqgk`)
- "And my filter idea is because you wouldn’t let a child see stuff like terrorist …" (`ytr_UgyRNBv2J…`)
- "Unironically though, isn't that what sam altman said? He said that if copyright …" (`rdc_mi75qlm`)
- "Thanku sir for making this. Sir can you plz tell me if a fresher can start a car…" (`ytc_UgyU0okz7…`)
- "We need national legislation that all AI generated images must contain a waterma…" (`rdc_ohxpwgx`)
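The ID prefixes appear to encode each comment's source: `ytc_` for YouTube comments, `ytr_` for YouTube replies, and `rdc_` for Reddit comments. That reading is inferred from the samples above, not documented anywhere on this page; a minimal Python sketch under that assumption:

```python
# Prefix-to-source mapping inferred from the sample IDs on this page
# (an assumption; the actual convention is not documented here).
ID_PREFIXES = {
    "ytc_": "youtube comment",
    "ytr_": "youtube reply",
    "rdc_": "reddit comment",
}

def source_of(comment_id: str) -> str:
    """Return the inferred source for a prefixed comment ID."""
    for prefix, source in ID_PREFIXES.items():
        if comment_id.startswith(prefix):
            return source
    return "unknown"
```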
## Comment

> There are major holes in the idea that AGI is autonomous. Ai cannot work without “defining” what something is. Morality is not real unless you define it to Ai and give it millions of data points that helps it categorize morality. Plus, AGI and super Ai are not the same, yet they are being defined to the public as the same.

*youtube · AI Moral Status · 2025-07-30T20:0…*
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
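The table above shows one coded record across the four dimensions. The value sets in the sketch below are only those observed in the outputs on this page; the full codebook may allow more values (an assumption). A minimal Python sketch of a record type with validation:

```python
from dataclasses import dataclass

# Value sets observed in the coded outputs on this page; the real
# codebook may define additional codes (assumption).
RESPONSIBILITY = {"developer", "company", "ai_itself", "none"}
REASONING = {"deontological", "consequentialist", "mixed", "unclear"}
POLICY = {"none", "regulate", "liability", "unclear"}
EMOTION = {"mixed", "fear", "outrage", "approval", "indifference"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension holds an unknown code."""
        for field_name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"{field_name}={value!r} is not a known code")
```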
## Raw LLM Response
```json
[
{"id":"ytc_UgxDwJxsviz873aqH-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzMAIbiee_l3jFVEjZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzU5jflk0VRHvPYeDt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwEZnAwT_ngVx1ahIB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwtFThDM9gSq1FbW8R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxSnfxVBB6Jj3nLBuB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzVJw3dmB5dftqfhj54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqLpIumeTYlfwoiFh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzIlHya3EIHHQRHJaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6YXAJHzE0jPyb3gB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
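Each raw response is a JSON array of per-comment codings keyed by `id`. A minimal sketch of parsing one into the `CodingResult` records sketched above; it assumes the model returned strict JSON, as in this sample, so real outputs may first need fence-stripping or repair:

```python
import json

def parse_raw_response(raw: str) -> list[CodingResult]:
    """Parse a batch coding response into validated records."""
    rows = json.loads(raw)
    results = []
    for row in rows:
        result = CodingResult(
            comment_id=row["id"],
            responsibility=row["responsibility"],
            reasoning=row["reasoning"],
            policy=row["policy"],
            emotion=row["emotion"],
        )
        result.validate()  # reject unknown codes before they reach analysis
        results.append(result)
    return results
```

Validating at parse time keeps malformed or off-codebook model output from silently entering downstream tallies.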