Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The thing is that if we DON'T stop the progress of AI, then we will create a new race of sapient beings on par as humans. What then? When AI jumps from sentience to sapience, we have to consider what rights then have as programmable intellects. We can't justifiably reprogram a sapient AI if it has an "rational" idea that we disagree with because that would be like drugging a human and re-socializing them which is unethical. Then you get into the can of worms that is "rehabilitating" vs. "re-socialization". It's a complicated question as we need to look at the future and the past alongside our current ideas. We have to decide how to react to something we as a species have never faced before based on our assumptions of what the original act will create. It's an very difficult challenge to tackle and we are all make contingency plans for a problem. It might not be this specific problem, but we all plan for the future and prepare for the unforeseeable.
Source: youtube · Posted: 2015-02-28T07:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugg6_c_fnxJFiXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjBrm-BO4E1Z3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Uggwq5VL_P9YvngCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugi28m3CG46xzHgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UggmA4p100IU0HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgiqEwaXkqSM-ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugi0PpcKcA8VCXgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgghsB3quoCVXHgCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgjzbO8DgHLWlngCoAEC","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgiclBN6LTRIL3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}
]
```