Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- "AI needing maintenance, can blackmail humans to keep up with maintenance or it w…" (ytc_Ugxxgu3Zo…)
- "I did my own version of this, with some tweaks, AI admitted that gangstalking is…" (ytc_Ugz0rY3yy…)
- "Unfortunately do you have any options to prevent such face change. AI goes beyon…" (ytc_UgxHGZ1Bk…)
- "If you aren't spending the same amount of time with your AI tools as you would w…" (ytc_Ugwr1kAYd…)
- "This guy is going to argue this A.I. all the way into consciousness. You'll know…" (ytc_UgyTGDlZj…)
- "@jordanthecommander6977 it aint that deep bro. the Data implies that AI art has …" (ytr_UgwwUn0dC…)
- "@abram730 So what you're saying is that the AI itself is the artist, right? If t…" (ytr_Ugy5e044w…)
- "2:22 the difference between an artist taking inspiration and ai taking \"inspirat…" (ytc_UgzXqBcaZ…)
Comment
Very good topic. I actually have a good understanding of AI and certain types of ai are much more dangerous than others. LLM for instance are completely harmless on their own as they are essentially just a very complex mathematical mapping of text to text. However, they can be used as the engine or backbone of a complex system that could then be dangerous. The type of AI that is much more inherently dangerous is reinforcement learning. The example of the AI killing its controller in this video is just the true kind of danger that these systems could entail. It is similar to our capitalist society where the almighty dollar is the bottom line and being humane takes a second seat. It is simply a by product of their design, an intrinsic evil. Creating and AI powerful and intelligent enough to redefine our civilization while also ensuring that it is benign is going to be our ultimate trial that will either be our savior or our demise.
youtube · AI Governance · 2023-07-07T22:4… · ♥ 16
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxxZNHATvwmscJuBaN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxC83aCbORFAo79aVF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyKboeUahQcdO9R96t4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw3Ka5H3iqcolHfJEl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy-PiXnuTIXOjbKx114AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzrk3dieff65oM43l14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwC5gA0mEu2LD_kkLV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzyagaM5JEKP0XizYR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjeMYX2FBWs_T50F94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxTuPamZWPKuprUFbB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
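A raw response like the one above is a JSON array of per-comment code assignments. The sketch below shows one plausible way to parse and validate it before indexing by comment ID. The allowed value sets are inferred from this single sample (and the coding table above), not from the project's actual codebook, so treat `ALLOWED` and the function name as illustrative assumptions.

```python
import json

# Assumed dimension vocabularies, reconstructed from the sample response
# above; the real codebook may permit more values.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects) and
    return a mapping of comment ID -> {dimension: value}, rejecting
    rows whose values fall outside the assumed vocabularies."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID this way would also support the "look up by comment ID" inspection described at the top of this page, since a coded record can be fetched with a single dictionary access.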