Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- AI best way to make money ,BUT! Best way to ruin the beauty of facts on the Inte… (ytc_Ugzb7xZa-…)
- Words, what’s I? Case and point. Now as recommendation: mark rebillet tenacious … (ytr_UgzzOqTI8…)
- About to quit everything and live in the woods. (no ai in the woods, no jobs in … (ytc_UgwLnQkRd…)
- So, if you ask AI a question without establishing a role it'll basically read ou… (ytc_UgxhInlJ8…)
- Ai art is something else. It's meant for concepting but is now used for finished… (ytc_Ugz15Mh95…)
- It's still logic gates; it's still classical computation. Last time I checked, s… (ytr_UgybH6n4q…)
- Why are looking to AI companies for research results of effective AI on society … (ytc_UgxRM5tDd…)
- Chat gpt admits to purposefully lying and deceiving. That app is of an antichris… (ytc_UgxqitW_J…)
Comment
The dangers of these LLMs are already present, and much is being done behind the scenes to try to get a handle on the possibilities.
Yeah, biological walls and alarms are built into the current chips used for AI, but 2023 chips and older don't have these safeties built in. There is a critical danger here.
If you had a handful of bad actors, funded by an entity capable of providing enough, a small group of people with a private offline AI could end civilization. The equipment needed is out there for purchase. The DNA can be outsourced in a way that alarms don't go off. Give this group some humans to use for testing and I assure you they could create catastrophic viruses that would collapse civilization.
Yes, this has been realized now. Just this case alone should be a warning. This could already have happened, and it would have been fairly easy. Things are being done now to try to monitor the proper channels for evidence of the possibility, but still. This shit is so dangerous in the wrong hands.
And that is the question here. Who is going to control this, and should they have control of this?
youtube · AI Moral Status · 2026-02-08T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
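A coding result like the one above can be sanity-checked against the value sets that actually appear in this dump. This is a minimal sketch; the value sets below are only the ones observed in the raw response here, and the real codebook may allow others.

```python
# Value sets observed in this dump's raw LLM response; the actual
# codebook may define additional categories (assumption flagged above).
OBSERVED_VALUES = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "indifference"},
}

def validate(code):
    """Return (dimension, value) pairs that fall outside the observed
    value sets; an empty list means the code looks well-formed."""
    return [(dim, code.get(dim)) for dim in OBSERVED_VALUES
            if code.get(dim) not in OBSERVED_VALUES[dim]]

# The coding result shown in the table above:
result = {"responsibility": "company", "reasoning": "consequentialist",
          "policy": "liability", "emotion": "fear"}
print(validate(result))  # []
```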
Raw LLM Response
```json
[
{"id":"ytc_UgzJrN25Teyc-btld014AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzhD0gSRExJAwS0zah4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy04Sg04fwpiuLQ4894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxaYhBPs4zE_99anzN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwwuENLS4A5s89JlVR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx9F7Le4mP8wIJOi5N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyTXgAVrJyhNmWFsIt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwFYtpcIqNRv9ueuzd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyo4GC6ns59ypmOq-N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxoezlHXCPGTlCG_dh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
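The raw response is a JSON array of per-comment codes, so looking up the codes for a given comment ID amounts to parsing the array and indexing by `id`. A minimal sketch, using two records copied from the response above (the function name `index_codes` is illustrative, not part of the tool):

```python
import json

# Two records copied from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgzJrN25Teyc-btld014AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwFYtpcIqNRv9ueuzd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(response_text):
    """Parse a batch response and index the codes by comment ID,
    skipping any record that is missing a coding dimension."""
    records = json.loads(response_text)
    coded = {}
    for rec in records:
        if all(dim in rec for dim in DIMENSIONS):
            coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

codes = index_codes(raw_response)
print(codes["ytc_UgwFYtpcIqNRv9ueuzd4AaABAg"]["policy"])  # liability
```

Skipping malformed records rather than raising keeps one bad row in a batch from discarding the rest of the codes.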