Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Hey guys, I wanted to pop into the comment section to seek help of sorts. I writ…" (`ytc_UgzhR7idu…`)
- "So basically AI is a computerized Psychopath with access to all your data and th…" (`ytc_Ugy8UmI_n…`)
- "Can we call them ai idiots instead? Calling them "artist" is just a straight up …" (`ytc_UgxqkEPH1…`)
- "AI is nothing more than a scare tactic used by the elite globalist to frighten a…" (`ytc_Ugzh8hrN7…`)
- "Most people that order packages and/or food want it delivered to their door. How…" (`ytc_UgzzIsM4J…`)
- "Nearly all AI roll-outs so far have been by grifter consultants, and CEOs trying…" (`ytc_UgxNmhsmE…`)
- "This kind of generative AI is okay imo. Its all done locally on the studios hard…" (`ytc_Ugx1zGy22…`)
- "If " we " build it. We being north Korea, Russia, China, Iran and us. Going to b…" (`ytc_UgwpwBOWO…`)
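The "look up by comment ID" flow above can be sketched as a simple ID-indexed map over coding records. The function names, variable names, and record shape here are illustrative assumptions, not the dashboard's actual implementation; the field values are taken from the coding result shown below.

```python
# Minimal sketch of looking up a coded comment by its ID.
# build_index and the record layout are illustrative assumptions,
# not the tool's real API.

def build_index(records):
    """Index a list of coding records by their comment ID."""
    return {rec["id"]: rec for rec in records}

records = [
    {"id": "ytc_Ugxd3TYVS15lQB10P9F4AaABAg",
     "responsibility": "ai_itself", "reasoning": "consequentialist",
     "policy": "regulate", "emotion": "fear"},
]

index = build_index(records)
hit = index.get("ytc_Ugxd3TYVS15lQB10P9F4AaABAg")
print(hit["policy"])  # regulate
```

Using `dict.get` rather than indexing means an unknown ID returns `None` instead of raising, which suits an interactive lookup box.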
Comment
For the AI to be dangerous in itself it doesn't need "consciousness", just to be better at general problem solving (which maybe requires "consciousness", but it's difficult to tell without knowing what consciousness is).
Since badly aligned AGI is more likely to be an existential risk, and there is not a lot of money and resources going into ensuring it will be safe, I think it's as important, or maybe more important, to talk about, even if it's more difficult to predict (and it being more difficult to predict makes things more alarming, not less).
Killer AGI bots can start a war and kill millions or even billions of people, but they probably aren't going to destroy the world (and if the world ends up in a nuclear war, it most likely would have happened without them anyway).
Even with nuclear war the human species can survive, especially if we can colonize Mars by then.
This is not the case if we make AGI and don't align it with our preferences correctly.
youtube
2018-04-04T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwAVOlBOYPdgvUieYp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw_xOqKVmTHf7jLsYt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwpK64q2_7_JOMZKpx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxzTxtU54j9cESInjV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzcOxYI7sFR_y4Ej0F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzv0TpgMPYmGiGU0Sp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzsdlmX4Rwu3aOg-m54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw0KWaBcbmDd6lLKp14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugxd3TYVS15lQB10P9F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxFgi8SXdWzJdGhfaN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}
]
```
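A batch response like the one above can be parsed and sanity-checked before the records are stored. A minimal sketch: the allowed label sets below are inferred only from the values visible on this page, so the full codebook may contain more values (an assumption, not a spec), and `parse_batch` is a hypothetical helper name.

```python
import json

# Label sets inferred from values observed in this sample response;
# the real codebook may define additional values (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "government", "user", "ai_itself"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "indifference", "resignation", "outrage", "approval"},
}

def parse_batch(raw: str):
    """Parse a raw LLM batch response, splitting valid rows from malformed ones."""
    rows = json.loads(raw)
    good, bad = [], []
    for row in rows:
        ok = (isinstance(row.get("id"), str)
              and all(row.get(dim) in vals for dim, vals in ALLOWED.items()))
        (good if ok else bad).append(row)
    return good, bad

raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
good, bad = parse_batch(raw)
print(len(good), len(bad))  # 1 0
```

Keeping the rejected rows (rather than silently dropping them) makes it easy to re-prompt the model for just the comments whose labels fell outside the codebook.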