Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- `ytr_Ugz1_Xvee…`: "@birolunal319think again, Ai like dall e or sora USE PICTURES AS RECOURSE BUT D…"
- `rdc_jsy9m9f`: ">It's probably the best time to be in engineering, software or robotics / You …"
- `ytc_Ugz2d1cqe…`: "Here's the thing: up until a few years ago my character drawing ability started …"
- `ytc_UgwASIPe8…`: "I have just now experimented with AI and ChatGPT and I must say I am grateful fo…"
- `ytc_UgyNBVqPG…`: "From what I've heard, the ai bros are saying that some method that I'm pretty su…"
- `rdc_fnwy0v4`: "Marrying a Korean American, I have a solid escape plan in case things really go …"
- `ytc_UgzssedQi…`: "Actually I'm fine with AI art and even though I'm fine with it I'm still learnin…"
- `ytr_UgxIAuPkz…`: "Did you ai generate this whole sentence or what because this is just a ton of wo…"
Comment (source: youtube, posted 2023-04-10T20:5…)

> Yes, yes, absolutely. Eliezer Yudkowsky is right. If we keep going down this route of developing more and more capable AI before we have any idea how to make them safe, we are extremely likely to end up with a humanity-destroying superintelligence. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy83H09c4gfq6uxYjZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyP5lveoWOTrjWf6LJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyvg-CH9GSOCPUEVbR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwQw4JWhX2nhMY0u9p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx2pEfsmYXiwmu2NDt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzVRUDndIfytgUwgqp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwqYG-J4Ib1nvVZvox4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzR5-C7aSAS3ioMawB4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyzCdzKxfVemspBmCZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwjaWrLdpLPZF1-8FJ4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
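The lookup-by-ID view above amounts to parsing the raw batch response and indexing it by comment ID. A minimal sketch of that step, assuming the response is a JSON array of records with the four coding dimensions shown (the two example records are copied verbatim from the response above; the function name and the skip-malformed-rows policy are illustrative, not the tool's actual implementation):

```python
import json

# Two records copied from the raw batch response shown above.
raw_response = """
[
  {"id": "ytc_UgwqYG-J4Ib1nvVZvox4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzVRUDndIfytgUwgqp4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(response_text):
    """Parse a raw batch response and index the codings by comment ID."""
    records = json.loads(response_text)
    index = {}
    for record in records:
        # Skip malformed rows rather than failing the whole batch.
        if "id" not in record or not all(d in record for d in DIMENSIONS):
            continue
        index[record["id"]] = {d: record[d] for d in DIMENSIONS}
    return index

codings = index_by_id(raw_response)
print(codings["ytc_UgwqYG-J4Ib1nvVZvox4AaABAg"]["responsibility"])  # developer
```

Indexing by ID also makes it easy to spot comments the model silently dropped from a batch: any submitted ID missing from the index was never coded.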