Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I think its cute people think everyone will follow regulations. There's probably…" (ytc_UgzWReC9w…)
- "In not too distant future, AI and robots will be better than human in everything…" (ytc_Ugwc3SSXf…)
- "There might be a way for the ai to get around this by not training anymore and u…" (ytc_UgzOOyZrs…)
- "Ever since the industrial revolution, we were promised that machines would make …" (ytc_UgwV3l9V_…)
- "I worked at NDHQ in Ottawa. AI was making decisions for Afghanistan at least as …" (ytc_UgyxXVJzc…)
- "Lmao the whole point of AI companies is not to help humanity. It is to boost the…" (ytc_UgxYmBKxi…)
- "This issue is way deeper than people loosing their jobs or even than consequence…" (ytc_UgzybWNM7…)
- "🤦🏽The early programming for humans to believe that \"AI\" is a god. End Times homi…" (ytc_UgzFB_TtO…)
Comment
I fully believe AGI/ASI to be the Great Filter
Right now dozens of corporations all of them with different motives, intentions and goals are racing to create something we have no idea how to align to our values, completely without restriction or oversight. People often compare AI to the danger of a nuclear bomb, but we are talking about something much more dangerous and sophisticated. An AGI doesn't have to be "evil" to end human existence, even just having different ethical/philosophical views could lead to it deciding we just aren't worth keeping around. Things we could never understand because that is quite literally what making something smarter than us means. Like you could never explain to a cat what quantum mechanics are even if you spoke fluent cat, simply because it cannot grasp it as a concept, us humans may also not be able to grasp AGI thinking.
I hate to end this on a sad note but even if regulations are sped up, realistically we would see results in 2 years at the earliest and that is simply not fast enough. All it takes is one AI with the capability of self-improvement, it wouldn't even need to be conscious to end humanity.
If you wanna talk about this stuff drop your Discord below :) (and amazing video exurb1a as always)
Source: youtube · Video: AI Moral Status · Posted: 2023-08-22T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyiKT5BKhVcksj2GsR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxPMNf6czYiQ7We6-94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgySaHWnke7Qe6Rb6Fl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwRcZLmOS2EDQfjO5Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy1HL-q2YxgRnaXHup4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzMg7EIOWswV20Rovx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy8B1pJvFfPiKUXHKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzfOUkUh3feQbDOdvp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx-Y-SlE49TPycLo_V4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx-wRMIkPu_MADzaFR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
```
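Since the model returns one JSON array per batch of comments, a downstream consumer needs to parse it and check each record against the coding schema before writing it to storage. The sketch below shows one minimal way to do that; the `ALLOWED` value sets are inferred from the responses shown on this page and may be incomplete, and `validate_response` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# sample responses above and are an assumption, not a documented schema.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "mixed", "indifference", "approval"},
}

def validate_response(raw):
    """Parse a raw LLM response and check every record against the schema."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset all carry the ytc_ prefix.
        if not str(rec.get("id", "")).startswith("ytc_"):
            raise ValueError("bad comment id: %r" % rec.get("id"))
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError("%s: unexpected %s=%r" % (rec["id"], dim, rec.get(dim)))
    return records

raw = ('[{"id":"ytc_UgxPMNf6czYiQ7We6-94AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
records = validate_response(raw)
print(len(records))  # 1
```

Rejecting a whole batch on the first bad value is deliberate: a malformed record usually means the model drifted from the prompt format, so it is safer to re-code the batch than to keep partial output.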