Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzaLOQqx…: "It learns from people on the internet / So the people screaming these opinions ar…"
- ytc_Ugzf92tMr…: "But... When you make robots that do most tasks, and can make themselves, labor i…"
- ytr_UgzeoBFHy…: "That's an interesting point! The potential of quantum computing in AI is certain…"
- ytr_UggxLltXX…: "maybe we should all just roll over and play dead. i dont have the time or money …"
- ytc_UgzL2-SrM…: "That makes a lot of sense honestly, I think all you need to do is make a super t…"
- ytc_UgxISs8XG…: "What you are saying is nonsense. Of course not a single profession will be "comp…"
- rdc_ksmd91m: "This is hilarious. Friendly reminder that companies have to adhere to anything t…"
- ytc_UgwoFaOvA…: "Total dogshit video . You don't even post the text to copy & paste then post a l…"
Comment
Is consciousness produced in the brain?
Please explain how.
A machine alone can never do that, but they are using Man + Machine = AI.
This is where it gets dangerous.
AI needs the highest form of regulation; the people who say it's not dangerous are really not sure what they are talking about, even if they are scientists.
Secondly, why is a scientist who is not from the AI domain trying to be an expert on almost everything?
Honestly, they have no idea what they are talking about or what its possibilities are.
AI would be more dangerous than nuclear capabilities, and that is why it needs regulation.
Do they know why some US defence officials and Elon think it could be dangerous?
youtube · AI Moral Status · 2020-07-14T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugwqoed4lf_k2U0ltB94AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz65U1X58QEexSDBx94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwyjPvCUroBKZ62kxl4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxJxByxdAhlPXPTdXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwmcrwaXmz2NG4URFZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytc_UgwBUyKtpakcyl4wwIl4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx60WlpA2rlF7T60LZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzwK0SbACHJ5NtbXL54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgwlrLg5UKyLe_u7rrd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyRm2GnzhSvWmXj-YR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
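The raw response above is a JSON array of per-comment codings, one object per comment ID, with four coded dimensions each. Below is a minimal sketch of how such a batch might be parsed and validated before storage. The allowed value sets are inferred only from the examples visible on this page; the actual codebook may define additional categories, and `parse_batch` is a hypothetical helper, not part of any pipeline shown here.

```python
import json

# Allowed values per dimension, inferred from the batch above.
# The real codebook may include categories not seen in this sample (assumption).
SCHEMA = {
    "responsibility": {"none", "company", "unclear", "developer", "user",
                       "ai_itself", "government", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed", "virtue",
                  "contractualist", "unclear"},
    "policy": {"ban", "regulate", "unclear", "none", "liability", "industry_self"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed", "resignation"},
}


def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID.

    Raises ValueError if an entry is missing an ID or a dimension,
    or uses a value outside the (assumed) schema.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            raise ValueError(f"entry without id: {row!r}")
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
        # Keep only the coded dimensions, dropping any extra keys.
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded
```

Once the batch is indexed by ID, retrieving the coding for a single comment (as in the table above) is a plain dictionary lookup.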