Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_Ugzrvm4yh…: "Technologists can build AI and robots to produce far more wealth than humans nee…"
- ytc_UgwKimS4l…: "At this point it’s not “coincidence,” freaknin called this nonsense when I said …"
- ytc_UgwFg3g6J…: "it shows ai main goal of taking over but playing a child with a hulk mask. i lo…"
- rdc_eudg3br: "IIRC, Kenya didn't quite built the mooncity, but rather it was the host of the m…"
- ytc_Ugwt9Gihd…: "If this part of the algorithm is integrated with AGI the development of AI will …"
- ytc_UgzJ_4h7Q…: "Emily keeps referring to it as a person. A person is a boological organism. Late…"
- ytc_UgwNyiggG…: "There’s something profoundly stupid about purposely creating Robots/A.I. smart e…"
- ytc_UgxtV7Pl5…: "I think WhatsApp may have used AI for sending plus posting photos and videos on …"
Comment
I agree more with Wolfram. I never understood why humanity thinks it is so special or elevated. I also don't think I could plausibly argue that the net effect of humanity on life is a net positive, or it's contribution to the universe is a net positive. How could we possibly know that AI taking over wouldn't be better for the universe or life, in general? AI may actually be better at minimizing suffering.
youtube
AI Governance
2024-11-28T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
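The four coded dimensions in the table can be modeled as simple enumerations. Note this is only a sketch: the value sets below are inferred from the responses shown on this page and are an assumption, not the tool's full codebook.

```python
from enum import Enum

# Assumption: allowed values are inferred from the raw responses on this
# page; the actual codebook may define additional categories.
class Responsibility(Enum):
    NONE = "none"
    AI_ITSELF = "ai_itself"
    DEVELOPER = "developer"
    GOVERNMENT = "government"

class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"

class Policy(Enum):
    NONE = "none"
    REGULATE = "regulate"
    LIABILITY = "liability"
    UNCLEAR = "unclear"

class Emotion(Enum):
    FEAR = "fear"
    MIXED = "mixed"
    APPROVAL = "approval"
    INDIFFERENCE = "indifference"
    OUTRAGE = "outrage"
    RESIGNATION = "resignation"

# The coding result in the table above, expressed with these types:
coding = {
    "responsibility": Responsibility.NONE,
    "reasoning": Reasoning.CONSEQUENTIALIST,
    "policy": Policy.NONE,
    "emotion": Emotion.INDIFFERENCE,
}
print(coding["emotion"].value)  # indifference
```

Constructing an enum from a raw string (e.g. `Responsibility("government")`) raises `ValueError` on unknown values, which catches off-codebook labels in model output early.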
Raw LLM Response
```json
[
{"id":"ytc_Ugw-qIiIwV-YymSHgvd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgypQWCF9VagQJtuPv14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgypPmjfzq25ijOSz0F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw870D6MmUSUZIxAxR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-as17KTJwqqtzbm94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzsZzaiyXkzOY2521F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyNsqqRfuqgl2VxHxx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyeG4LdxoQ9X8Zc8NF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzv4s8QRbEx2s1BexJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwQGXjt0iKYAmC7jHV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
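Looking up a coding by comment ID amounts to parsing the raw response and keying each record by its `id` field. A minimal sketch, assuming the model always returns a JSON array whose objects carry the five fields shown above (the two records below are copied from the raw response for illustration):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_Ugw870D6MmUSUZIxAxR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzsZzaiyXkzOY2521F4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw coding response and key each record by comment ID."""
    codings = {}
    for rec in json.loads(raw):
        # Reject malformed records (missing ID or missing a dimension)
        # rather than silently indexing partial codings.
        if "id" not in rec or any(d not in rec for d in DIMENSIONS):
            raise ValueError(f"malformed record: {rec!r}")
        codings[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return codings

codings = index_codings(raw_response)
print(codings["ytc_Ugw870D6MmUSUZIxAxR4AaABAg"]["emotion"])  # indifference
```

Raising on malformed records is deliberate: LLM output can drop fields, and failing loudly at parse time is easier to debug than a `KeyError` during later lookup.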