Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I bet she praises some students who just know how to use ChatGPT better than the…
ytc_UgxAekqdV…
the problem is not an indigenous problem... open your mind please... they want t…
ytc_UgzLJeghe…
There are so many ways AI, and the ppl using it, is going to f up the status quo…
ytc_UgxSFCO02…
If there is one thing I hope from ai images is that it will oversaturate the mar…
ytc_UgyJ-Gl8N…
We are decades behind robots that drive autonomous vehicles, so far behind ... .…
ytc_UgzxZylyI…
Three things. One: ai genuinely scares me. Just the amount of stuff they have ac…
ytc_Ugzfq_9mB…
Just confirms logic really. How can you create something to be “Alive” and not e…
ytc_UgwLiPwew…
isnt this the guy who tests if people are gonna be fooled by ai art?…
ytc_Ugw24ZW-1…
Comment
All of this is not terrifying or surprising. AI has no feelings and/or morals. It's given an input to process and looks for the most logical reason/response, etc to complete based on gathered data. You can't program human nature into a machine. The bigger and scarier picture is the thought of what can happen when AI does gather all info (histories, people's emotions responses, etc) into it's databases and starts to determine it's own matter of "wrong" and "right". AI can be a great, positive thing for the world, but if not monitored closely, could turn into something more dangerous
youtube
AI Moral Status
2023-05-01T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwFA417oLAFHZfOT0N4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzFrOgYxFzWbd-2PHF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyF9sFmoScyJWAMD3x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzwKwoo6iQeRB77GTt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgwA2_bOA-4vIXMlJL54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgywmUpTt7UJkJERamp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwRXKKVur9mQ6bUtOt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgydP3kAoc5g0tC6TD54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw4n16XUpPZL4SiYEJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFsIb37U1WAgx4Zvx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]
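The lookup described above ("Look up by comment ID") amounts to parsing the model's JSON array and indexing each coding record by its `id`. A minimal sketch, assuming the response is valid JSON with the four dimensions shown in the Coding Result table (only two records from the dump are reproduced here; `index_by_id` is a hypothetical helper name, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, keyed by comment ID.
# (Two entries copied from the response above; the full response has ten.)
raw_response = """[
  {"id": "ytc_UgyF9sFmoScyJWAMD3x4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwFA417oLAFHZfOT0N4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(response_text):
    """Parse the model output and index each coding record by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: {dim: rec[dim] for dim in DIMENSIONS} for rec in records}

codings = index_by_id(raw_response)
coding = codings["ytc_UgyF9sFmoScyJWAMD3x4AaABAg"]
print(coding["emotion"])  # fear
```

Indexing by `id` also makes it easy to verify that every comment sent in the batch came back coded, and to flag any IDs the model dropped or invented.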