Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by comment ID.
Random samples — click to inspect:

- "People who is thinking it is just an technological trend is not understanding wh…" (ytr_UgxN9O16B…)
- "it's a human trait to give an answer despite not knowing. If AI avoided this, it…" (ytc_UgyshaDX-…)
- "How about we just avoid programming a robot to be sentient... Then this doesn't …" (ytc_Ugjesjn2d…)
- "Been leveraging AICarma to optimize content for AI responses; it's really improv…" (ytc_UgwgrgoTK…)
- "Great video as usual. One thing did bother me. You kept using the term “auto pil…" (ytc_UgzOo3s7d…)
- "I don’t think AI art will ever fully replace human artists. We as humans, care t…" (ytc_UgzGfYE5f…)
- "Ai is highly overrated like most of the hyped up bullshit that is being sold. Ki…" (ytc_UgyRRZUen…)
- "Would you pay for a portrait or upload yourself into chatgpt and frame the respo…" (ytc_UgwjQK_Ap…)
Comment
I've had a number of long conversations with ChatGPT. It's impressive how quickly it answers, but less impressive is how often it's wrong, as can be shown with any subject matter expert. It seems to be programmed for speed rather than accuracy. Also, when asked to analyze complex problems which humans can't solve, it falls back on, "That's a complex topic which requires a multi-faceted approach to solving." But it can't suggest anything beyond the talking points of its leftist programmers. In other words, no insight, innovation or out-of-box thinking. In short, if it can't solve real-world problems, what good is it?
youtube · AI Moral Status · 2024-02-03T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgzH3eV-vMvFMuYpyKt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwutkFJyZvBa98QfyN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwePP7wEEjaj5FhCu94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxUs39XESIdujKjsdx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwiLU-pqLWvHqT1ZUd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugy3gKGdGYdwTruSGwp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwYAJYdn5NT-lflNcl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzRpoAPij8VrjygMlp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxBTGWWOiLIsdIJq1x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxUIy_DAVlNoi9zaKR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}]
```
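The raw response above is a JSON array with one coding object per comment, keyed by the same IDs shown in the sample list. A minimal sketch of the look-up-by-ID step (this is an illustration using two rows excerpted from the response above, not the tool's actual code):

```python
import json

# Excerpt of the batched LLM response: a JSON array of per-comment codings.
raw_response = '''[
  {"id": "ytc_UgwiLU-pqLWvHqT1ZUd4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgzRpoAPij8VrjygMlp4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]'''

# Index the codings by comment ID so any comment can be inspected in O(1).
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for the comment shown in the "Coding Result" table above.
coding = codings["ytc_UgwiLU-pqLWvHqT1ZUd4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer liability
```

The dimension values printed here match the "Coding Result" table for the selected comment (responsibility: developer, policy: liability).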