Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below.
- "Wait and see, you all will rue the day that you thought AI was okay.…" (`ytc_Ugyk11yAz…`)
- "Nice Ai you can tell it's not really a robot you can see the computer image over…" (`ytc_Ugy2_f1BO…`)
- "I really dont like "ai" it makes me. My friend actually tried to defend it ai ar…" (`ytc_Ugw1RYrIi…`)
- "I believed this up until around 2 months ago when i gave loveable a try. Its not…" (`ytc_UgyYXCk-4…`)
- "The interview with the former OpenAI researcher opened up an extremely thought-p…" (`ytc_Ugw-mhzXG…`)
- "The first thing that popped into my mind after the AI as mother idea was what ab…" (`ytc_UgyNQriov…`)
- "@Hellmiauz I think so like there is alot of people that are dependent on work f…" (`ytr_UgyQADfR6…`)
- "Ai is evil and going to do stuff like this when it becomes more powerful." (`ytc_UgxB9oh9V…`)
Comment
While i think this guy is a genius, this is an awful analogy.
a.) Your dog knows you're not a dog.
b.) Why would you need to put chips in monkeys brains to make them behave? Lol. We definitely dont have a perfect relationship with eachother, but monkeys and people have lived side by side with each other for a really long time.
I think its way to early for people to beim saying things like "A.I. will behave in this way, and this is how we should respond" when in all actuality we dont actually know and any actions we take toward it now will just as likely hurt the average person as opposed to help them.
Source: youtube · AI Responsibility · 2023-07-06T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw-oOTns05SqDaR83l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxSCtAeOJcUGGf3Y014AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxbKgML3zGYGMlfgWh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxeonewovDqcavyTQl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy3ILFV0z148OCrXT94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwrvxIgjIrJMsr70Gx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwG17vDM0wqvr2XnFF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwvuGsjZ9twsgE2koZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgywMsx5rs7GC0sITOR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzq03v8xzgRbknskSt4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
```
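A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the value sets inferred from the output shown here (the real codebook may allow more values) and the `ytc_`/`ytr_` ID prefixes seen in the samples:

```python
import json

# Allowed values per coding dimension, inferred from the raw response above.
# This is NOT the full codebook -- an assumption for illustration only.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed", "unclear"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs start with ytc_ (top-level) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present with an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugw-oOTns05SqDaR83l4AaABAg","responsibility":"unclear",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
print(len(parse_raw_response(raw)))  # → 1
```

Filtering rather than raising on malformed records keeps one bad row from discarding a whole batch; rejected IDs can be re-queued for recoding.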