Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "This guy is a proper 'yknow' merchant. How can you have faith in the development…" (ytc_UgxKTk8nM…)
- "I'm already out of work because of AI enjoying my breakfast beer and staring int…" (ytc_UgxtyYtGL…)
- "The reporter asked the robot if it wanted to destroy man kind?. He already creat…" (ytc_UgzvDKRbg…)
- "How can you sit in a car travelling at roughly 40m per sec and not glance forwar…" (ytc_Ugy8oBj-X…)
- "We'll know GPT-4 found a cure for cancer or unlimited energy production when it …" (ytc_UgxWp186x…)
- "What have you guys been doing on the website??? I just do normal things with the…" (ytc_UgzKPR7hp…)
- "Remember my words. AI will overpromise and underdeliver. It is a calculator that…" (ytc_UgxTRy0-r…)
- "No, I don't. But I also think that real people are being impacted and nothing is…" (rdc_lr7s9aq)
Comment

> We don't need AGI to have existential risks. All we need is sufficient advanced technology to manipulate us at scale and bad actors to use it. I'd say we have both today. Even in the optimistic scenarios, where AI is used for good, the pace and scale or changes would be so fast that the humans wouldn't be able to adapt fast enough and still be relevant from an economic point of view. To me, that is sufficient to destabilize the human society to the point or wars and going back to medieval times.

Source: youtube · Topic: AI Governance · Posted: 2023-06-27T11:3… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyEhL4ch47VLdP9gNJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugy54_8cttHpxZSJiJd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzoUkud1w7TAbQHNYJ4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx4ml_9jq-QphGs3QN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwt2RbzurF3SGpPwPB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx8yUV9CM49pTu14AR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8fQDWMBP-0LRsOAB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyPmsCuJ23rvS19wY54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxAAEp9lz-G1mKP3sl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxcuDNaybYEsp5vnLZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
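A response like the one above can be turned into a per-comment lookup before it is shown in the table. The sketch below is a minimal, hypothetical example (not the project's actual pipeline): it parses one raw JSON batch and indexes it by comment ID, rejecting any value outside the category vocabularies that appear in this sample batch. The `SCHEMA` sets are inferred from the data shown here; the real codebook may define additional categories.

```python
import json

# Category vocabularies inferred from this sample batch (an assumption,
# not the project's full codebook).
SCHEMA = {
    "responsibility": {"none", "user", "developer", "ai_itself", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"resignation", "outrage", "approval", "fear", "indifference"},
}

def index_codings(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into an {id: coding} dict, rejecting out-of-vocabulary values."""
    indexed = {}
    for row in json.loads(raw):
        coding = {dim: row[dim] for dim in SCHEMA}
        for dim, value in coding.items():
            if value not in SCHEMA[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
        indexed[row["id"]] = coding
    return indexed
```

With the batch above, `index_codings(raw)["ytc_UgxcuDNaybYEsp5vnLZEsp..."]` would return the coding rendered in the "Coding Result" table (`distributed` / `consequentialist` / `regulate` / `resignation`). Validating against a fixed vocabulary is one simple guard against the model inventing labels outside the codebook.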