Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
That's just a delusional person. the actual reason is simple. Why take so much l…
ytc_UgxuBtlHl…
AI will copy at the beging. And then suddenly it will create its own masterpice.…
ytc_UgwAOBvL8…
I'm an artist, and I'm somewhat pro ai art except I think it's a bit aesthetical…
ytc_UgxJhOfc7…
The real challenge isn’t deepfakes, it’s convincing my parents that not every vi…
ytc_UgxOdF1hK…
You are missing a major point ... I have talked to several AI about this ... AI …
ytc_UgyPTN6ci…
The Amish will be “unemployed”? AI is meaningless if you reject it and celebrate…
ytc_UgxwVpi0e…
When i went to school 30 years ago, they said that everything in my field would …
ytc_UgxzKxZ8l…
Talib IS racist. The officer is not. You dont have to be for, or against Facial …
ytc_UgyzHrz-W…
Comment
I agree that AI has the potential to be very bad but I think it's far more likely to follow similar trends to the Internet and social media, at least at first. It will need to be regulated but freely available to everyone without discrimination. We'll need to ensure that it can unlearn or counter racism, sexism and any other human bias that will inevitably be in the training data. Like a child, we'll need to teach it and "raise" it to be better than us. With the right training and approach AI can be an incredible tool for good. Cynicism against AI at this stage, I feel, is a greater risk to humanity than optimistic problem solving and logical conversation about it's role and purpose in society.
youtube
AI Responsibility
2023-05-19T14:2…
♥ 15
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyS0o9_IA3K_a2lw4x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwsuNTCJ5Cx_lnRkQJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw4TapWQti14gCMhWt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzUYp4hYb5GlcMKN6B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzj-6SVXwSx8DIac4J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxGYlG3PDQmObi9qyl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyqKYANBjDPNvVqQ014AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxwJEA5teBlQc31g-B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_UgyRQyfB9NVHQI9YaqR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxYcd7TL4dXvYjBfiR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
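A raw response like the one above can be parsed and sanity-checked before its codings are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are only those *observed* in this sample (the full codebook may define more), and the function and variable names are illustrative, not part of the actual tool.

```python
import json

# Dimension values observed in the sample response above. The real
# codebook may allow additional values; these sets are illustrative only.
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "fear", "outrage", "mixed", "resignation", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record lacks a dimension or uses a value
    outside the observed sets.
    """
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in OBSERVED_VALUES.items():
            value = rec.get(dim)
            if value is None:
                raise ValueError(f"{comment_id}: missing dimension {dim!r}")
            if value not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        # Keep only the four coding dimensions, keyed by comment ID.
        coded[comment_id] = {dim: rec[dim] for dim in OBSERVED_VALUES}
    return coded

# Example: the record that matches the Coding Result table above.
raw = ('[{"id":"ytc_UgyqKYANBjDPNvVqQ014AaABAg","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"regulate","emotion":"approval"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgyqKYANBjDPNvVqQ014AaABAg"]["policy"])  # → regulate
```

Validating at parse time means a malformed or off-codebook model output fails loudly instead of silently polluting the coded dataset.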