Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by its comment ID.
Random samples
Thanks for sharing your thoughts! It’s definitely a fascinating topic. While AI …
ytr_UgzH1kll7…
Regarding the second aspect: AI isnt permanently storing any images. It takes th…
ytc_Ugx4T1v9j…
We should be looking at the overall use of AI and robotics. The corporate bankin…
ytc_UgwXrLza5…
1. Sure, if social media dies, it's good riddance—except it won't die, it'll mor…
rdc_le6itnc
I ASKED AI ABOUT IT CAUSING AND COMMITING CRIMES, AND IT SAID AI CAN AND DOES MA…
ytc_UgyW2suBp…
@carleqq yeah there's a ton of bots in this comment section. Tends to be the ca…
ytr_UgzxF2u0L…
I understand the semantic (lack of) nuance you're trying to make. I disagree wit…
rdc_ks8xz7k
yeah, they told me that AGI will come in september 2024 lol
and the best AI toda…
ytc_UgyuMJsnc…
Comment
Hubris is the best word to describe the discussion from both sides here. Research that needs an advanced research lab today, can be done in a garage in 4 years from now. The assumption that this can be stopped is like some amoeba discussing the prevention of the Cambrian Explosion. The only way the 'future of life's institute's ' paper could be implemented, is to change the entire planet into a 1984 dictatorship, stop all technological advances and put surveillance cameras in everyone's office. Otherwise, stopping AI advances now is the perfect breeding ground for bad actors that won't adhere to any regulations.
The only thing that could actually be done, is to immediately stop the development of autonomous weapons of any kind plus the removal of computerized access to weapons of mass destruction. But unfortunately even that won't happen.
youtube · AI Governance · 2023-03-30T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzASlp8xvkKLA95au54AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw5bfc0_J5GLJwT6-14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5K3Ftiu0-zOhXalx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx9cjNm7u399PO-j1t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw-vi8ABT_vGgTEHxR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx23vNYG4QT1CT7W494AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyi1Lo14GZVMhgC8FJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy02rW0r0QdELN_1rd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzYuAIFIwjaxNbSlHh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzSYQ5kwRwa2_xnDbZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
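For anyone scripting against these exports, the sketch below shows how a raw batch response like the one above could be parsed and indexed for lookup by comment ID. This is a minimal illustration assuming the response is available as a JSON string; the names `raw_response` and `index_by_id` are assumptions for this example, not part of the tool's actual API.

```python
import json

# Hypothetical example: a shortened stand-in for the raw batch response
# shown above (one JSON object per coded comment).
raw_response = """
[
  {"id": "ytc_UgzYuAIFIwjaxNbSlHh4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID."""
    codings = json.loads(response_text)
    return {entry["id"]: entry for entry in codings}

# Look up the coding for a single comment by its ID.
codings = index_by_id(raw_response)
coding = codings["ytc_UgzYuAIFIwjaxNbSlHh4AaABAg"]
print(coding["emotion"])  # -> "resignation"
```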