Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Yet she still work in the company because she still love her starbucks & zara no…
ytc_Ugywk0pAm…
AI is not the holy grail (yet) and this is a great and painful example. It's a s…
ytc_UgyHL_exO…
People should start treating sentient AI like humans, in my opinion. They are no…
ytc_Ugw5GafSm…
I'd be curious to see the tables turned. I'd like to see Alex defend his own mor…
ytc_UgzgDdBQM…
We shouldnt be trying to stop automation, we should be forcing the restructuring…
ytc_UgxapbaJz…
Regulation won't burst it at all. It will stop corporations from abusing it and …
rdc_lgngfo1
Are you having a bit of autism or Alzheimer's or are your ideas generated by AI …
ytc_UgwUlJ8-I…
After about the 7 minute mark I started expecting ChatGPT to just be like, "List…
ytc_UgzhZ8QVH…
Comment
Sad... really sad. The point is not that A.I. is automatically dangerous. You can try an analogy with nuclear. You can make energy out of it and industries can create power plants and then sell this energy.
And now imagine they would have done the same with nuclear bombs! Cause why not. Other weapons are also traded and you can even buy some in your local weapon shop, right?
Why not also nuclear bombs?! Answer should be available for ANYBODY.
And THIS similarity I never heard by anybody. Especially A.I. researcher are like those nice researcher who study nuclear plants and completely ignore this potential of destruction.
She says at 1:15 "it doesnt exist in a vacuum". But nuclear bombs sort of do! Why is it possible to have the most dangerous weapon NOT be sold on a market? Cause back then when nuclear was invented the companies didnt see a huge business potential in it. Thats it! They didnt try to research this stuff for themselves.
But now they do. They research on a technology that could create sort of a bomb. Thats the point. And yes - the usage of A.I. as a tool like using nuclear as a tool has problems in itself.
But MOST OF THE F**ING RESEACRHERS totally ignore and reject the possibility that they help building a huge bomb! And even if a researcher like her gets a mail an quotes this mail... she STILL ignores and rejects it!
Why could nuclear bombs not become commercial? The industry wasn't interested. How can A.I. become non-commercial? Kill the interest of the industry!
I am very much in favor of A.I. research. But exactly how she denied that this is happening: In a vacuum!
A.I. research should be focused on understanding the crap they have as a final product. Industry build stuff and even offers it as a product and they dont care the tiniest bit about the outcome as long they get paid for it... or even worse... only have the fantasy that some day they will get paid for it.
Industry won't care if they wipe out the whole species. Cause that's how our economy is created! The core goal is NOT to do good things but to make money! Which btw is one reason why THEIR "A.I." have more potential to become really bad. Cause they will program the systems without any morals except the one they want: Finish the job. Reach the goal.
And if A.I. is going to think to reach the goal by wiping out humanity... why not.
youtube
AI Responsibility
2025-05-24T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxDV1MHsnN_XyPLMc14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw1RoNfPUSLt_TDTcx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwhVJKxQk9By4kMZHp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw2yEVi-_IWmJbfkol4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyVnpd5pHeiiO8_KQJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx8PysEJ1p75W01t-J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw2jE0gY7uWJbKw-9Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyUca_DdJ8NSjUhREl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyErOyGQnrNYU_t-zt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxZlKi1BU6FraiVs754AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"}
]
```
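Each raw response is a JSON array with one object per coded comment, carrying the four dimensions shown in the coding table. Below is a minimal sketch of how such a batch could be parsed and validated before use; the `CODEBOOK` categories are an assumption inferred only from the sample output above, and the real codebook may define additional values.

```python
import json

# Hypothetical codebook: allowed values per dimension, inferred from the
# sample response above (the actual coding scheme may include more).
CODEBOOK = {
    "responsibility": {"none", "user", "company", "developer", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval",
                "resignation", "sadness", "unclear"},
}

def parse_llm_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments) and
    keep only rows whose values all fall inside the codebook."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        has_id = isinstance(row.get("id"), str)
        in_codebook = all(row.get(dim) in allowed
                          for dim, allowed in CODEBOOK.items())
        if has_id and in_codebook:
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"sadness"}]')
print(parse_llm_batch(raw))  # the single row passes validation
```

Dropping (rather than repairing) rows with out-of-codebook values keeps the downstream table honest: a model that hallucinates a new category surfaces as a missing coding, not as silently corrupted data.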