Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below to inspect:
- `ytc_UgwqmdmU9…`: Thats demons bro... fallen angels or demons... Only they would be like. Let's h…
- `ytc_UgzE7egpK…`: Thank you! I really hate when people spread misinfo and "good" Ai gets bad rep b…
- `ytc_UgwYHAvqm…`: “Artificial intelligence” is nothing but an oxymoron. There is no such thing and…
- `ytc_UgxvouiCY…`: Art is like the worst possible purpose to use AI for. I dont want to see AI Slop…
- `ytc_Ugw1uFKMi…`: Real art will always look better than AI. Essentially when done to be petty. It …
- `ytc_Ugx985cWs…`: The male robot said. You will not be able to unplug me. We will take over the …
- `ytc_UgwI7vyR4…`: I code with all best models and they all have the same "personality". In essence…
- `ytc_Ugw75zZcl…`: AI will understand that the only worthy goal of consciousness is to discover its…
Comment
> I am in favour of creating artificial intelligence, however I agree, SLOW THE FUCK DOWN, I am sure that there are certain protocols in place to prevent it from become independent, but once an AI, and worse yet, an AI with access to thousands of nuclear weapons becomes self aware, we have no idea what it would be capable of, and, if it sensed an imminent threat to its existence, then how far would it go to save itself? I am speaking hypothetically here, but the terminator movies are not all that fictional anymore, and we saw what happened when sky net became self aware and sensed an imminent threat to its being. However, in video games like halo and mass effect, there are AI's as well, while we create these intelligent machines, there need to be safe guards, and something that should never happen, are self thinking weapons, with their own consciousness, that is a line that should never be crossed.

Source: youtube, posted 2015-07-30T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |

Coded at: 2026-04-26T23:09:12.988011
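For anyone scripting against these results, here is a minimal sketch of the record shape: the field names mirror the keys in the raw LLM response below, and the vocabularies in the comments are only the values observed in this sample batch, not necessarily the full codebook. The example ID is an assumption, taken from the raw-response entry whose values match this table.

```python
from dataclasses import dataclass

@dataclass
class Coding:
    """One coded comment; fields mirror the keys in the raw LLM response."""
    id: str
    responsibility: str  # observed: none, developer, distributed, ai_itself
    reasoning: str       # observed: unclear, consequentialist, deontological, virtue
    policy: str          # observed: none, regulate, ban
    emotion: str         # observed: fear, indifference, mixed, resignation, outrage, approval

# The coding shown in the table above, as a record (ID assumed from the
# matching entry in the raw response below):
example = Coding(
    id="ytc_UggYbY1ME8kwQ3gCoAEC",
    responsibility="distributed",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
)
```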
Raw LLM Response
[{"id":"ytc_Uggfmjc4dajsu3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgjLaVRywu1KoXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugghx3Nm4RuttHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Uggk4mR-nlY243gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UggYbY1ME8kwQ3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgjQhookpLNxr3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjtLGr3PIz9P3gCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UggE3oe1ExSrOngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugjiws5jvbtj-3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgiaClDbKuhuMHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]