Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
When AI could have been used to explore new medicines / new areas of science / c…
ytc_UgyM4QOys…
7:36 But AI does not exist between queries. And when it "exists" while resolving…
ytc_UgwI1SJdG…
What if 'the goal' is bringing humans to present stage 'BY AI Aliens ' from begi…
ytc_UgxGDOfTs…
AI (as it is build today) will only ever be as good as the general value of the …
ytc_Ugy2EmF8V…
The phase "AI Art" is on its own is Nonsensical. Art is creativity, it's a gate…
ytc_UgzHkEdZc…
A question, what exactly does "Character" AI Chat mean? Do you tell the AI to ma…
ytc_UgwjqykeL…
Can’t say i fully support AI art and artist, but how can also support the modern…
ytc_Ugw313JaQ…
I am ashamed for the children that this AI "Artists" once were, it's just sad to…
ytc_UgwpvTRjg…
Comment
I think this is the kind of doomsday we are talking about: that AI with its subtle features destroys our societies. Not so much that it pushes a button to shoot a nuke. The key question is: what to do about it. And I think it is in no way a bad thing if some people tackle this problem by starting with the most solvable problems.
In my view, the big question is how we limit the proliferation of dangerous AI without throwing away all its important benefits (e.g. by prohibiting it altogether). The almost completely uninhibited implementation of AI we currently witness is certainly not the way to go. But we also need a lot of social science research to tackle some of these problems, which would delay AI quite a bit (probably decades). Meanwhile, AI can be a lifeline for some people, for example by scaling up educational resources for underserved communities or solving tough problems in medicine.
youtube
AI Responsibility
2023-11-06T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgxwPElZVXSeRa-4WrF4AaABAg.9wmQmxxQMAY9wn9JXVYSLd","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzGZ-KfndIdSmbe9HR4AaABAg.9wmPqx_6m5R9xLnCPeVbHa","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxutE1MxXKGjVrKaaZ4AaABAg.9wmNOBAm_Te9wnezbZhhyT","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgwHqeUcFeWwY_BeopZ4AaABAg.9wmFg-qbKG99wmqth3s71x","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwHqeUcFeWwY_BeopZ4AaABAg.9wmFg-qbKG99wp8MXJMg2A","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwHqeUcFeWwY_BeopZ4AaABAg.9wmFg-qbKG99wpREnqDVY5","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxmiDdYRLsKvrPR9nd4AaABAg.9wm9XQLn7-H9wnElBDtxrK","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxmiDdYRLsKvrPR9nd4AaABAg.9wm9XQLn7-H9wnb2kzWOS7","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxmiDdYRLsKvrPR9nd4AaABAg.9wm9XQLn7-H9wokmNKBcGk","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytr_UgxmiDdYRLsKvrPR9nd4AaABAg.9wm9XQLn7-H9won39cv5Hk","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
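The raw LLM response above is a JSON array with one coding object per comment, keyed by `id`. A minimal sketch of how such a response could be parsed and a single comment's coding looked up by ID (the function name `lookup_coding` and the variable names are illustrative, not part of the tool):

```python
import json

# Raw model output: a JSON array of codings, one object per comment ID.
# This sample is taken directly from the response shown above.
raw_response = """
[
  {"id": "ytr_UgwHqeUcFeWwY_BeopZ4AaABAg.9wmFg-qbKG99wmqth3s71x",
   "responsibility": "distributed", "reasoning": "mixed",
   "policy": "regulate", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding dict for one comment ID,
    or None if that ID is not present in the response."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(
    raw_response, "ytr_UgwHqeUcFeWwY_BeopZ4AaABAg.9wmFg-qbKG99wmqth3s71x"
)
print(coding["responsibility"])  # → distributed
```

Looking up by ID this way matches each coding object back to its source comment, which is what the "Look up by comment ID" view above does when rendering the dimension table.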