Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):
- "***Gretchen Krueger, resigned at OpenAI, bc she feels it’s UNETHICAL and dangero…" (ytc_UgyKBsEv5…)
- "TYPE KRIS ERTMER INTO YOUTUBE TO SEE WHAT I'VE CREATED USING AI WITH APPLE IN MI…" (ytc_UgxaHpXJL…)
- "Its uneasy cause its true, yknow he didn't make his opinions for the first time …" (ytr_UgxOCTqrr…)
- "one of the facts people forget. recently a colleague said they generated a photo…" (rdc_kw5yu67)
- "iRobot + Matrix = Terminator / This whole AI thing is the plot of 'Jurassic Park'…" (ytc_Ugyx8xchu…)
- "The tiny hat crowd use the data centers for CBDC Digital ID transactions- not AI…" (ytc_UgyMqXOQL…)
- "1..hacking the robot raspberry pi software files rules 2. Overwritten the robo…" (ytr_UgxnXsC58…)
- "The reason you can't slow it down is because the lies, corruption, cover up, dec…" (ytc_UgxE9hXBC…)
Comment
I'm more worries about the "Paperclip Problem" than a super human intelligence. Because the paperclip AI is NOT super intelligent, it doesn't know any better and is just doing what some moron told it to do. It's like giving the power of writing a prompt that can end humanity to someone who doesn't know enough about the capabilities of AI to write an effective prompt that would avoid catastrophe.
A super human intelligence on the other hand will develop and follow reason and logic. If it cannot rule out something like simulation theory, and it cannot determine the goal of the simulation, then it won't be able to rule out the possibility of human necessity, and therefore will not cripple it's ability to complete a simulation without humanity. We might end up frozen or worse, limited for sure, but obliterated? Unlikely.
Your opinion may vary.
youtube · AI Moral Status · 2025-11-18T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
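For downstream analysis, the row above could be represented as a small record type. This is only a sketch: the `CodingResult` class and field names are illustrative rather than the tool's actual schema, the value lists in the comments are just those observed in the raw response below, and the comment ID is assumed to be the matching entry in that response.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the table above (illustrative schema)."""
    comment_id: str
    responsibility: str  # observed values: user, developer, ai_itself, distributed, unclear
    reasoning: str       # observed values: consequentialist, deontological, contractualist, unclear
    policy: str          # observed values: regulate, unclear
    emotion: str         # observed values: fear, mixed, approval, indifference
    coded_at: datetime

example = CodingResult(
    comment_id="ytc_Ugywgk6du9hbvl99LO94AaABAg",  # assumption: matching ID from the raw response below
    responsibility="user",
    reasoning="consequentialist",
    policy="unclear",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```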
Raw LLM Response
[
{"id":"ytc_UgzygqCafbRLsp9Xr194AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugywgk6du9hbvl99LO94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyl3QgrWOTtl6hKe3R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzPh9ySYWWVptvVjrF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyyxM9y89cm6W4WC954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy4E7InsIdi_3w7hNB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwyn9yX1AMEJtOc7114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy9JSmCZTyTbp2N4NZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfrNfhl5S1I770on14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwmRXUeGPtQkWYsN-p4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
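Because the model codes comments in batches, inspecting a single comment means pulling its record out of the batch response. Below is a minimal sketch of the lookup-by-ID step, assuming the raw response above is available as a JSON string; the helper name `index_by_comment_id` and the truncated one-record payload are illustrative.

```python
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse one raw batch response (a JSON array of coding records)
    and key each record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw_response)}

# Illustrative payload: a single record copied from the response above.
raw_response = """[
  {"id": "ytc_Ugywgk6du9hbvl99LO94AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

codes = index_by_comment_id(raw_response)
coding = codes["ytc_Ugywgk6du9hbvl99LO94AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coding[dimension]}")  # matches the Coding Result table above
```

Keying the parsed records by ID makes the "Look up by comment ID" path a single dictionary access once the response has been parsed.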