Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- They just need to program the ten commandments in the ai and you're done lol😂… (`ytc_UgyU8uSO1…`)
- Core problem. Programmers are lazy. A.I. programming is using the internet as a … (`ytc_Ugww6PNJl…`)
- The fact that you use drama music is enough for me to know this is just clickbai… (`ytc_UgwCBXZEW…`)
- It’s understandable to think this way, but the “99% of jobs disappearing soon” i… (`rdc_oi2mkkc`)
- The only way any of you are getting anything, is if the AI becomes sentient and … (`ytc_UgzmW3BXS…`)
- I prefered AI robotic kid, for those who cant have kids it would be great!… (`ytc_UgwX8GDL5…`)
- Intelligence and consciousness are separate things. I have no doubt we will soon… (`ytc_UgzLkIMEn…`)
- @zahraa4723 We live in a simulated universe which means that everything is simula… (`ytr_UgxInfOif…`)
Comment
This is very interesting, but not surprising. Our biology and so our history and our data, is poised on self-preservation; unfortunately, sometimes at any cost. Naturally, models simulating this data distribution, also capture this pattern. Clearly, nothing sentient is at play. In fact, it's strange referring to these models as "they." Although I admit, the human tendency to anthropomorphise is compelling at times. We should remind ourselves that "it" is more than sufficient to address these complex mathematical functions. This report is a great eye opener as to how we are in need of credible bodies that oversee AI ethics. The cyber security field would also benefit from more study into possible malicious use of AI so that such scenarios are stopped before they play out. Good news is that this is already happening on at least a small scale. Today, many people are already aware of dangers that lace this wonderful new technology. That's thanks to reports like this! Thank you. The internet was challenging to regulate, too, but we managed to come up with a good enough system. I'm positive we'll do it again for AI. It'll likely never be conscious, but it's already dangerous. However, we are also already vigilant. As long as we continue to spread awareness about the realistic cons of AI and work towards fair and strong governance of it in addition to prudent personal use, we should be fine. In my view, I don't think AI is to be feared at all. Its very exciting and it's pros likely outweigh it's cons. I'm looking forward to all the good that AI might bring.
youtube · AI Governance · 2025-05-30T20:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwNX-bDTodhZMNuaWV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCA-FUf6_rGBAREtB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy1V58J02_jAgFjTlB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx_4tGPUnDCTDmZaCJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwuc4Bt7Ift_HjxvXp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyhHaUc1wIGnDVuNWB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzvVHZZOksz8EmE8el4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzovbz4p4TOIK6gywF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyISX1W-ibYnTQ-sK54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwT0z0qNk7rEUFmLm94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
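The "look up by comment ID" step can be sketched as below. This is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response is a JSON array of coding objects in the format shown above, and the two entries in `raw_response` are abbreviated from that payload.

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment,
# as in the payload shown above (truncated here to two entries).
raw_response = """[
  {"id": "ytc_UgwNX-bDTodhZMNuaWV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzovbz4p4TOIK6gywF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

codings = json.loads(raw_response)

# Index the codings by comment ID so any coded comment can be
# inspected directly, without scanning the whole array.
by_id = {row["id"]: row for row in codings}

row = by_id["ytc_UgwNX-bDTodhZMNuaWV4AaABAg"]
print(row["emotion"])  # indifference
```

Building the dict once gives constant-time lookup per comment ID, which matters when a batch response covers many comments.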