Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Apparently Bercy's AI does not target politicians, the managers at Bercy, …
ytc_UgwWTYNRY…
A.I is telling you that when you see my ufo 🛸 or uap 🌟 which is above your city …
ytr_UgwAeorGg…
Art does not have to be good a stickman is art a bee is art everything you draw …
ytc_UgxxCattF…
Except for the "scientists" with no credentials exposing the "conspiracy." They …
rdc_d2yxb0g
my main issue is why there was no emergency braking automatically applied, any o…
ytc_UgwpgKed5…
I applied for unemployment benefits three weeks ago and I haven't heard anything…
rdc_fn5mttb
1:53 ummm.yes?.... you dont understand how fast AI is growing. When you compare…
ytc_UgyQiAkkX…
Even with all human knowledge and superhuman AI in our pocket, half of us still …
ytr_UgxM9V1PZ…
Comment
So I see two problems here.
First of all, the biggest problem is human greed. The goal is to achieve this super AI faster than anyone else, without worrying about the precautions and safety measures that should be in place before it is implemented.
Second, and most important of all, why do we assume that AI will try to make humans extinct? In order to learn something, we usually need to base our knowledge on something that already exists. Here, the problem again lies with humans.
The issue is not superintelligent AI itself, but the possibility that humans might create it and try to use the most advanced technology ever created to destroy other humans. If that happens, AI will learn from our actions and may conclude that eliminating humans is the objective.
If this super AI were created in a world where humanity was united, working together to build the best possible future for Earth and life beyond it, then super AI would likely help us achieve that goal.
youtube
AI Governance
2026-03-11T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
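Each dimension takes a value from a closed label set. A minimal validation sketch, assuming a code book inferred from the labels that appear in the raw responses below (the actual vocabularies are an assumption, not confirmed by the tool):

```python
# Hypothetical code book: label sets inferred from observed values only;
# the real vocabularies used by the coder are an assumption.
CODEBOOK = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"mixed", "unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"outrage", "approval", "fear", "unclear"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the code book."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above passes validation.
print(validate({"responsibility": "developer", "reasoning": "mixed",
                "policy": "regulate", "emotion": "fear"}))  # []
```

A record with an unknown label (or a missing dimension) would be reported by name, which makes it easy to flag malformed rows in a batch.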
Raw LLM Response
[
{"id":"ytc_Ugw_0T8vo3wWKANuh1J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzpCS6NDBdfQUAl6gN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzIb5A4dYcxutfarTx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxKEy-kl_1HmycNcIR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyLylOqm3ZYMbCf03V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgyJWqYg6K1CHk3xXvp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJ2hiNWRvqF-flxUl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy9DUVd8ciDogoV2ld4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw_HMtwQFmcg4EjfOt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgznVYpx83k1QRhQYoV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
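The "look up by comment ID" view above amounts to parsing the raw batch response and indexing it by `id`. A minimal sketch, using one record from the array above (the surrounding pipeline is an assumption):

```python
import json

# A single record copied from the raw batch response; the full response
# is a JSON array of such objects, one per coded comment.
raw_response = """[
  {"id": "ytc_Ugw_HMtwQFmcg4EjfOt4AaABAg",
   "responsibility": "developer", "reasoning": "mixed",
   "policy": "regulate", "emotion": "fear"}
]"""

records = json.loads(raw_response)
# Build an ID -> record index for constant-time lookup.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_Ugw_HMtwQFmcg4EjfOt4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # fear
```

This is the record that matches the coding result table shown above (developer / mixed / regulate / fear).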