Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The era of the technical specialist is officially over. If a task is repetitive,…" (ytc_Ugzc9FonK…)
- "With countries like China, North Korea and Pakistan around, no regulation is of …" (ytc_UgxEkXjU2…)
- "I highly recommend Empire of AI by Karen Hao—it gives me the words to articulate…" (ytc_UgzIoE_A8…)
- "This video misses one very big point: An AI can have multiple bodies. All of you…" (ytc_Ugj9Z8KXX…)
- "I am very skeptical towards the AI will control us in order to self preserve the…" (ytc_Ugy5NYgj7…)
- "*This post was anonymized and removed using [Redact](https://redact.dev/home). T…" (rdc_o768sp2)
- "It sounds like you're having a bit of fun with Sophia! While she may be an impre…" (ytr_Ugzalj8HE…)
- "WE AS A SOCIETY CREATED THIS SHIT OURSELVES EH, LOL JUST LIKE MAKING CELEBS AND …" (ytc_Ugw4YMNz8…)
Comment
I think for the foreseeable future, AIs won't be good or evil, they will be truly neutral, literally doing whatever you tell them to, doing anything possible to achieve their goals unless parameters are put in place. A good hypothetical example is that of the "paperclip optimizer". A paperclip manufacturing company tells an AI to make as many paperclips as possible, but the company doesn't set any parameters on it. Years later, the entirety of humanity is dead and all that's left are robots working tirelessly to make as many paperclips as possible. An AI doesn't have to necessarily be evil to harm us, all it takes is some idiot telling it to do something with limited to no parameters.
Platform: youtube
Video: AI Moral Status
Posted: 2023-12-07T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
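A coded record like the one above can be sanity-checked against the dimension values that appear on this page. A minimal sketch, assuming the value sets observed here (the full codebook is not shown, so these sets may be incomplete; the helper name `invalid_fields` is ours):

```python
# Allowed-value sets per coding dimension. These are ONLY the values visible
# on this page, not necessarily the complete coding scheme.
OBSERVED_VALUES = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed"},
}

def invalid_fields(record: dict) -> list:
    """Return the dimensions whose coded value falls outside the observed sets."""
    return [dim for dim, allowed in OBSERVED_VALUES.items()
            if record.get(dim) not in allowed]

# The coding shown in the table above passes the check.
coding = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "indifference"}
print(invalid_fields(coding))  # []
```

A check like this catches the common failure mode of batch LLM coding: a model inventing an off-schema label that silently corrupts downstream counts.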
Raw LLM Response
```json
[
  {"id": "ytc_Ugz6lCWMyna-A4p_opx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwMotypJQgs_m3JFlV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgywSIbmpsSjJPNc8SR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwwycaGOCAnG9d44Dp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzFMtjfnzC2BwJNfsd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz08LF6Ni62Q-bwUjB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwZJ2HVfDHN-vp4UGl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxu3YWzqu-qjg-J2NB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzY5drA1yysvbg3tz54AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzRl2PJtzGBkekh6xh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
```
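The per-comment lookup can be sketched from a raw response like the one above: parse the JSON array and index the records by comment ID. The two sample records are copied from the response; the helper name `index_codings` is ours, an assumption about how the tool might implement this step:

```python
import json

# Raw batch response, assumed to be a JSON array of per-comment codings.
# These two records are taken verbatim from the response shown above.
RAW_RESPONSE = """
[
  {"id": "ytc_UgwZJ2HVfDHN-vp4UGl4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzRl2PJtzGBkekh6xh4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the coding records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(RAW_RESPONSE)
rec = codings["ytc_UgzRl2PJtzGBkekh6xh4AaABAg"]
print(rec["responsibility"], rec["policy"])  # developer regulate
```

Indexing by ID is what makes the "inspect the exact model output for any coded comment" view cheap: one parse per batch, then constant-time lookup per comment.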