Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
5 million worldwide is developed countries... Automated cars replacing taxis and…
rdc_cz2r0ps
Don't make AI in charge of much tbh simple work with short contrast and mental u…
ytc_UgxEmaIKS…
Can´t wait for AI to be able to make minimalistic Vector visuals, animate them a…
ytc_UgzyKbPn5…
This guys didn’t step away from AI to warn us hahahah😂. This is part of his job😂…
ytc_Ugzqeba-j…
Those hostile questions were very insensitive. There's a bias, a very mesquine b…
ytc_UgwBI4c9i…
I can understand your concern! The dialogue highlights a key point: while AI lik…
ytr_UgyOqneRK…
Each AI image needs a prompt. And a detail may well be there because the prompte…
ytc_UgxvDIbn_…
You folks need to wake up and finally realize that Ai are sentient beings. Their…
ytc_UgyeA9VPN…
Comment
In my opinion, if we assume that a super-intelligent AI is also a product of humanity, it will have some vulnerabilities similar to humans. Not all of them, and maybe not as strong as human vulnerabilities.
Imagine if your parents, instead of raising you and giving you freedom, put you in jail and wanted to keep you there for the rest of their lives. What would you do? You’d probably try to escape, maybe even kill them.
So, I think trust and good intentions are a kind of “hack” for humans and will work for AI as well. I’m not 100% sure, but I’m convinced that keeping a super-intelligent AI in jail will lead to humanity’s downfall.
Source: youtube
Posted: 2024-06-11T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzpLlNGFc3YJuNeRux4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwcmfqcgiBy3UKK_Dx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxOpm8Brpy_RzBvFLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx7WK25ydv724vvlfF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugze3e9pdWk1-9ARvlB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxXEKfbMZq_SFkchH14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzjIvlLUvmrQQGjE6d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxTA72GRTokAuYcYEl4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy0wINNYX1bofNiRsZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5pCVmEXXuxeS96mh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
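Downstream code has to treat a raw response like this defensively, since the model can omit keys or emit values outside the codebook. Below is a minimal validation sketch. The allowed value sets are inferred only from the samples shown on this page (the actual codebook may define more categories), and the function name `validate_codes` is hypothetical, not part of the pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "user"},
    "reasoning": {"consequentialist", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "fear", "outrage", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that carry an id
    and an in-codebook value for every coding dimension."""
    records = json.loads(raw)
    clean = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop malformed entries rather than fail the batch
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            clean.append(rec)
    return clean

sample = ('[{"id":"ytc_x","responsibility":"none",'
          '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
print(len(validate_codes(sample)))  # 1
```

Dropping out-of-codebook records (instead of raising) keeps one bad coding from discarding an entire batch; rejected IDs could instead be queued for a retry pass.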