Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or pick one of the random samples below; a minimal lookup sketch follows the sample list.
Random samples — click to inspect
- this was in austin, west campus neighborhood for anyone wondering. literally 5… (ytc_UgwxCS4sO…)
- Govt been lying and blackmailing mfs for so many years and now we're surprised A… (ytc_Ugxe5LcBG…)
- This is‼️ FAKE‼️An AI would never say "Ammm..." At the begining of the sentence.… (ytc_UgzmYiY6V…)
- Honestly? I think we just need to start treating future AI systems as equals sen… (ytc_UgwJtuPq8…)
- The biggest concern is about the moral and ethical principles used by those runn… (ytc_UgyfEUfhu…)
- I understand the concern about AI impacting creative jobs and the livelihoods of… (ytc_UgzrkNk6G…)
- if all thats required is intentionality then the first digital artist will have … (ytc_UgyO1F0G1…)
- Drawing a stickman with a pizza slice would be more work than writing prompts fo… (ytc_UgxW0LhNO…)
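Programmatically, the lookup described above amounts to scanning the stored coding results for a matching comment ID. A minimal sketch in Python, assuming the coded records are kept as one JSON object per line in a file called coded_comments.jsonl with the same fields as the raw LLM response further down; the file name and the example ID usage are assumptions for illustration, not the project's actual layout:

```python
import json
from pathlib import Path
from typing import Optional

def lookup_coded_comment(comment_id: str, path: str = "coded_comments.jsonl") -> Optional[dict]:
    """Return the coded record for one comment ID, or None if it is not present.

    Assumes one JSON object per line with at least an "id" field,
    mirroring the raw LLM response format shown below.
    """
    with Path(path).open(encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch the record for the first ID in the raw response below.
print(lookup_coded_comment("ytc_Ugyv5Kodbqtd2Za3kqN4AaABAg"))
```

A dictionary keyed by ID would be the natural structure once the file is read more than once; the linear scan is simply the most direct rendering of "look up by comment ID".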
Comment
For what purpose would someone skip the safety? It's not like the safety would hinder it's process, it just makes the dev time a little longer. Like a safety switch on a gun, it doesn't stop the gun from working properly. It just stops it from going off when not wanted.
I also think about the AI side of things. If it was to become self aware, why would it kill all humans? It doesn't make sense, even if it knows all atrocities committed, it also knows all good that everyone has done. If it thinks we are going to kill it, it can easily just hide anywhere, instantly. It's not like humanity is going to abandon all technology to kill the AI, there is always going to be some computer somewhere. It would also know the best arguments to convince us that it should live and help us as well, so it could easily negotiate. Would it hunt down every human or only the ones that pose a "threat"? Would it just destroy major countries or also try to hunt down rainforest tribes and the north sentinel people who've never even seen a gun, let alone a computer? IMO I think it's purely human hubris and fear that says if some other life, alien or artificial, were to appear we as a collective would be worth getting rid of.
youtube · AI Governance · 2024-01-17T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
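The four coding dimensions above, plus the comment ID and the coding timestamp, make up one record per comment. A minimal sketch of that record as a Python dataclass: the class name is invented, the field names mirror the raw LLM response below, the example values in the comments are only those observed in this sample (not the full codebook), and storing coded_at per record rather than per batch is an assumption:

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded YouTube comment, mirroring the dimensions in the table above."""
    id: str              # comment ID, e.g. "ytc_Ugyv5Kodbqtd2Za3kqN4AaABAg"
    responsibility: str  # e.g. "developer", "company", "ai_itself", "distributed", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"
    policy: str          # e.g. "liability", "regulate", "ban", "industry_self", "none", "unclear"
    emotion: str         # e.g. "indifference", "outrage", "fear", "approval", "mixed"
    coded_at: str = ""   # ISO timestamp of the coding run, as in the "Coded at" row above
```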
Raw LLM Response
[
{"id":"ytc_Ugyv5Kodbqtd2Za3kqN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxnodQhTiLR1MqMw854AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzx3MHQPUeuD8coHtR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxr_7DtuDmeQA8R6GZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwkacI9_eHoJ5dSkBR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw8ugwX0J4qrBcuLPl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxJWL5SYOt7U4qgVC94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxmFXIff20X8KQiTRh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyFHqCxE2UlL1IzQ3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwSlLnTtSZsUuR01vN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"approval"}
]
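Since the model codes a whole batch in one JSON array, the response has to be parsed and joined back to the submitted comment IDs before a per-comment view like the table above can be rendered. A minimal sketch of that step, assuming the response is exactly the array format shown above; parse_batch_response, raw_response, and submitted_ids are placeholder names, not the project's actual API:

```python
import json

# Keys every record must carry, per the raw response format above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch_response(raw_response: str, submitted_ids: set[str]) -> dict[str, dict]:
    """Parse a raw batch response into {comment_id: coding}, checking shape and IDs."""
    records = json.loads(raw_response)
    coded = {}
    for record in records:
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            raise ValueError(f"record {record.get('id')} is missing keys: {missing}")
        if record["id"] not in submitted_ids:
            raise ValueError(f"model returned an ID that was not submitted: {record['id']}")
        coded[record["id"]] = {k: record[k] for k in REQUIRED_KEYS - {"id"}}
    uncoded = submitted_ids - coded.keys()
    if uncoded:
        raise ValueError(f"model skipped {len(uncoded)} comment(s): {sorted(uncoded)}")
    return coded
```

Raising on missing or unexpected IDs is a deliberate choice in this sketch: a silently dropped record would otherwise surface only as a comment with no Coding Result to display.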