Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
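The lookup can be sketched as a simple index over the model's batch output. This is a minimal sketch, assuming the coded responses are stored as a JSON array of records that each carry an `id` field, in the same shape as the Raw LLM Response shown further down; the function name and the example record are hypothetical.

```python
import json

def index_by_comment_id(raw_response: str) -> dict:
    """Index a batch of coded records by their comment ID.

    Assumes the model returns a JSON array of objects that each
    carry an "id" field, as in the raw responses shown below.
    """
    return {record["id"]: record for record in json.loads(raw_response)}

# Hypothetical batch in the same shape as the raw response shown below.
batch = (
    '[{"id": "ytc_example", "responsibility": "developer",'
    ' "reasoning": "deontological", "policy": "none", "emotion": "outrage"}]'
)
coded = index_by_comment_id(batch)
print(coded["ytc_example"]["emotion"])  # outrage
```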
Random samples — click to inspect
| Sample (truncated) | Comment ID |
|---|---|
| @JUICYbluepanda2 at that point some guy is gonna cry bc their AI girlfriend jus… | ytr_UgzR8H6a_… |
| How about the fact that not only AI is taking over jobs, but its made the job ap… | ytc_UgyWN3T6N… |
| Why the hell would anybody in the right mind agree to to fight or box a robot or… | ytc_UgwiufPEb… |
| Programmer, own a software company . AI that most people think about like they s… | ytc_UgwOWNv44… |
| Someone needs to teach ChatGPT to get angry. I'd live to hear it go "STOP INTERR… | ytc_UgwZx4-o7… |
| Generally, science is not about proving things so much as it is disproving thing… | rdc_et6lfpn |
| It's a freaking chatbot. You can quickly get it to affirm the sky is green and t… | ytr_UgzdECtLb… |
| At least when a person uses someone else's work as reference, they're still putt… | ytc_UgwvjR2Zc… |
Comment
This is like the 4th lady I've seen downplay the existential risk of AI. Like why? What are you accomplishing? How is that a "distraction"??? You don't think agents that are able to act independently as something worth taking the safety seriously of? People think the AI images and generated information is big, but that's just a small piece of a much bigger project. You put the language model on a robot it can think, you give it AI video it can imagine and simulate the world. Now you have infinite synthetic data. People underestimate because they are looking at things separately and not how they are all being put together. Add to the fact that these things are going to be everywhere and self improving and you are going to be more and more concerned as things compound which is why the people on creating these things feel existential terror as the technology improves, but we have yet to find answers for the millions of questions of just how to survive the near term let alone the long term.
youtube · AI Responsibility · 2024-02-20T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
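The coding-result table above can be produced directly from one coded record. A minimal sketch, assuming the field names seen in the raw response below and that the pipeline attaches the `Coded at` timestamp separately; the function name is hypothetical.

```python
def render_coding_result(record: dict, coded_at: str) -> str:
    """Render one coded record as a markdown dimension/value table."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

example = {
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
}
print(render_coding_result(example, "2026-04-27T06:24:53.388235"))
```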
Raw LLM Response
```json
[
  {"id": "ytc_UgyhX1bLYVaXaWys16B4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwJ6XXnt3BknYD75194AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyd0VOOFhIgKWV_qDN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyzuHjd9BKtUxlQSLt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzW5jSwYFEbumylX3V4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx2gp957etl9p3Ck1N4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw7HpySi8YMZLCCjNx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxMS6s7X58GmHFoiXl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyx-4wX03RPyG2pmFN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxzYJCJ70faZQaS5nF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
```
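A raw response like the one above can be checked before it is accepted into the dataset. A minimal validation sketch, assuming the category sets below; they are inferred from the values that appear in this sample, and the actual codebook may allow more.

```python
import json

# Allowed values per dimension, inferred from this sample (hypothetical codebook).
CODEBOOK = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def validate_batch(raw_response: str) -> list:
    """Parse a raw LLM response and flag records with out-of-codebook values."""
    errors = []
    for record in json.loads(raw_response):
        for dim, allowed in CODEBOOK.items():
            if record.get(dim) not in allowed:
                errors.append((record.get("id"), dim, record.get(dim)))
    return errors

# A record with a value outside the codebook is flagged:
bad = (
    '[{"id": "x", "responsibility": "user", "reasoning": "mixed",'
    ' "policy": "none", "emotion": "fear"}]'
)
print(validate_batch(bad))  # [('x', 'responsibility', 'user')]
```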