Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
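For example, a lookup can be a plain dictionary index over the stored results. Below is a minimal sketch in Python, assuming the coded records are stored one JSON object per line in a file named `coded_comments.jsonl` (a hypothetical name; the file name and layout are assumptions, not part of the tool):

```python
import json

def load_coded(path: str) -> dict[str, dict]:
    """Index coded comments by their "id" field.

    Assumes one JSON object per line, shaped like the raw LLM
    responses shown at the bottom of this page.
    """
    coded = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                record = json.loads(line)
                coded[record["id"]] = record
    return coded

# Hypothetical path; point this at wherever the batch outputs live.
coded = load_coded("coded_comments.jsonl")
print(coded.get("ytc_Ugwlf2ytab2xv8i3UI14AaABAg"))
```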
Random samples
- "And AI Police will emerge made of Humans that is Highly intelligent and uses cyb…" (`ytc_UgxooNAoC…`)
- "KillstreakGamer I agree with you that calling a robot hot is weird... but are yo…" (`ytr_UggUNuy-h…`)
- "Ummm they been using tobot and algorithms for years. We all knew this was inevi…" (`ytc_UgwrhiO4c…`)
- "This is precisely what every single exec will try to do with AI - reduce headcou…" (`rdc_mxy3ikm`)
- "I’ve had some excellent, wonderful profs at Uni that have genuinely made me into…" (`rdc_jvluaq2`)
- "@galdoug8918 Yes, I would agree that (the caricaturized stereotype of) promptin…" (`ytr_Ugwaf2kZV…`)
- "I agree. Almost every Job which AI creates will induce a boss to utilize AI for …" (`ytr_Ugwq5w_O9…`)
- "imagine teaming with chat GPT to make humans better and making us cyborgs. flesh…" (`ytc_UgxV24L1A…`)
Comment
I'm currently taking an AI policy class in university, and one of the things that we are discussing pretty regularly is the effectiveness of regulation. The problem that we have talked about a lot is that in the places where actual, real regulation exists (not just some random report that no one is going to read, like California is doing), there is no AI development (i.e. the EU; and don't even try to bring up Mistral, it places last in everything).
To use his plane analogy, a bunch of planes took off all at once, and if any one of them crashes, we are all screwed. The decision we are faced with is: do we ground our own plane to build the landing gear before we take off again? I'm not saying that Nate Soares is wrong about any of his predictions, I'm just worried that any chance at real change will be ineffective because there will definitely be places in the world that won't ground their "plane", so is it really a smart game-theory decision to ground ours?
I have no idea.
-a scared 20-year-old
youtube · AI Moral Status · 2025-10-30T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
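A coded record can be sanity-checked against known category values before it is displayed. The sets below are only the values observed in the raw response at the bottom of this page, not necessarily the full codebook, so treat this as an illustrative sketch:

```python
# Values observed in this sample only; the real codebook may allow more.
OBSERVED = {
    "responsibility": {"government", "developer", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"indifference", "mixed", "fear", "outrage", "approval"},
}

def unexpected_values(record: dict) -> dict[str, str]:
    """Map each dimension whose value falls outside the observed set to that value."""
    return {dim: record.get(dim)
            for dim, allowed in OBSERVED.items()
            if record.get(dim) not in allowed}
```

For the record above, `unexpected_values` returns an empty dict: government, consequentialist, regulate, and indifference all appear in the observed sets.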
Raw LLM Response
[
{"id":"ytc_Ugwlf2ytab2xv8i3UI14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzHssYICRCJiDWOUw54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyydrhSRJE9cP4zynx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzfDdBUi-n-XDLG-dJ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzuhaUqQ1HoO_RSw4F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx5h2D0ojYDwU-jVdl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxleDp9-cBbgdtRphx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxKs8NHbh-p4JQlPol4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw_NX6TOSnAkRz14BJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzpbUfw6_xbYVbkTY94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
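The response above is already a clean JSON array, but model output is not always this tidy; responses can arrive wrapped in markdown fences or truncated mid-array. A small defensive parser (a sketch, not the tool's actual implementation) might look like:

```python
import json

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response into a list of coded records.

    Strips a markdown code fence if present, then parses the JSON
    array. Raises ValueError on anything unparseable so the batch can
    be flagged for retry instead of silently dropped.
    """
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence (with optional language tag) and the closing fence.
        text = text.split("\n", 1)[-1].rsplit("```", 1)[0]
    try:
        records = json.loads(text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"unparseable LLM response: {exc}") from exc
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    return records
```

Applied to the response above, this returns ten records, one per comment in the batch.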