Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- 3:56 "...forced to be more disciplined." They're effin children! How would you f… (`ytc_UgxJ2-_is…`)
- I disagree. Pro AI is very good for creators. On top, is many AI image generator… (`ytc_UgyU3coQg…`)
- Bro forget he is thinking logically / And explaining every aspects / Which AI or com… (`ytc_UgzbsBlTS…`)
- What happens if, let's say some nefarious actor hacks a big player in the game &… (`rdc_nip0f12`)
- The flerf dumbards just can't win. How can you have argument with AI and expect … (`ytc_Ugz6Tn-K4…`)
- At least you're not Asian. It's not just the facial recognition that says we all… (`ytc_Ugyxo-i_x…`)
- I feel like ai art is fine if you use royalty free art in the data set and have … (`ytc_UgxQqEhPy…`)
- Education will be the biggest sector impacted ….no parent will want this in the … (`ytc_UgzGDJUEu…`)
Comment
I think this is a question we will have to find an answer for soon since AI technology is improving constantly. I think we shouldn't give household appliances and power tools intelligence, but machines that look and act similar to people are probably coming sooner than we think. Of we teach a robot the laws we need to make sure they follow those laws like us and we need to treat them like people so they have a reason to follow those laws. I think the androids in the future should be able to experience pain as to keep them from doing something that harms them twice and to help enforce laws on them. They will also need to be protected similarly to how we are, if a robot is attacked on the street we should be able to look into their brains and see what they say when it happened and find the person who hurt them, otherwise the machines will not be as willing to follow our laws if they aren't guaranteed safety.
What do you think?
Source: youtube · AI Moral Status · 2017-02-28T12:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
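A coded record like the one above can be checked against the codebook before it is stored. This is a minimal sketch, assuming per-dimension vocabularies inferred only from the values visible on this page; the actual codebook may contain additional categories.

```python
# Assumed vocabularies for each coding dimension, inferred from the
# values visible on this page; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "indifference", "mixed", "fear", "outrage", "resignation"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in codebook")
    return problems

# The record shown in the Coding Result table above.
record = {"id": "ytc_UgjmL9PTUYn27ngCoAEC", "responsibility": "distributed",
          "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"}
print(validate_record(record))  # → []
```

Rejecting out-of-vocabulary values at this stage keeps a single malformed LLM answer from silently polluting downstream counts.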
Raw LLM Response
[
{"id":"ytc_UgiKYV8v9JQYg3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjCkPtC30Z9mngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjmL9PTUYn27ngCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UggSRmUXxp_mdngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgjhwwXIci4w4HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugjo_2qmwrEy2XgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugi2Yut5usR3QHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgjcyN9r0FMRwHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgjSd-41hV6ELXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgiTz-lvV3YGIHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
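A batch response like the one above maps naturally onto the page's look-up-by-comment-ID feature: parse the JSON array and index the rows by `id`. A minimal sketch, using a shortened two-row sample; the variable names are illustrative, not the tool's actual API.

```python
import json

# A shortened two-row sample in the same shape as the raw response above.
raw_response = '''
[
 {"id": "ytc_UgjmL9PTUYn27ngCoAEC", "responsibility": "distributed",
  "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
 {"id": "ytc_UggSRmUXxp_mdngCoAEC", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
'''

# Index the coded rows by comment ID for O(1) look-up.
by_id = {row["id"]: row for row in json.loads(raw_response)}
print(by_id["ytc_UgjmL9PTUYn27ngCoAEC"]["policy"])  # → regulate
```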