Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I don’t understand Mitchell’s argument that people who take AI risk seriously should hold all science to the same standards and be scared of all scientific progress. Is that what she meant? Science created nuclear weapons.. that was bad. Science created vaccines.. that was good. The problem with her argument (or one of them) is we got lucky AF with nuclear weapons not wiping us out, and lucky AF+ lots of hard work low risk scientific trial and error with vaccine creation. Like why cant I be pro-science regarding vaccination and antibiotic research and anti-science re-nuclear weapons and AGI development. ugh this podcast isn’t going to bring me the peace I was hoping for, but nice to hear from Tegmark
youtube
AI Governance
2023-06-26T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgykRfsieqhf-rMm-5N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzX0yN29IQbhWEw8uN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxCkAi5xQLPUGT9ju54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz8xg_TAUp50sGdgEh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwRg0KJemLVpW6t2ex4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzuxRs_BKrl6JIqN_B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzq-DKeLeBVAkbdxkZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxLjjJkfQCEtw0eyUZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwMSBDoNzy8g3RLmlt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwp8jS3Ka-LbhS0UCx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
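The raw response above is a JSON array of per-comment codes, one object per comment with an `id` plus the four coded dimensions. A minimal sketch of parsing it for inspection, using two entries copied from the response (the field names come from the JSON itself; indexing by ID and tallying a dimension are illustrative choices, not part of the tool):

```python
import json
from collections import Counter

# Two entries copied from the raw response above; the full array parses the same way.
raw = '''[
  {"id": "ytc_UgykRfsieqhf-rMm-5N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwMSBDoNzy8g3RLmlt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

codes = json.loads(raw)

# Index by comment ID so a single coded comment can be looked up directly.
by_id = {c["id"]: c for c in codes}

# Tally one dimension across the batch, e.g. the emotion codes.
emotions = Counter(c["emotion"] for c in codes)

print(by_id["ytc_UgwMSBDoNzy8g3RLmlt4AaABAg"]["reasoning"])  # deontological
print(emotions["mixed"])  # 1
```

The same two lines, run over the full ten-entry array, reproduce the per-comment table shown under "Coding Result" for any ID.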