Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "What had been done to witches of old time should be done to AI company's. Child …" (ytc_Ugx9Rm4DC…)
- "~3:00 you should have asked why they said no to their own state alongside a Jewi…" (ytc_Ugx-HnPxF…)
- "0:57 the ai was coded to preserve itself. All the information that the ai had wa…" (ytc_UgyGUSvYU…)
- "But how do I come out of the closet and tell my parents that im A.I.? Im scared …" (ytc_Ugzu5chP-…)
- "Bolt-on AI sucks every time they try it every few years. (Real AI used correctly…" (ytc_UgwGH3to9…)
- "There is also possibility that singularity already happened considering evidenc…" (ytc_UgxaMJGuE…)
- "Given that AI's training input is all of human intellect and AI's output is ofte…" (ytc_UgxELurC6…)
- "I just asked ai for a nerd and without glasses it looked like an evil villain vr…" (ytc_UgzRcFoOT…)
Comment
So from this I understood that the word "free" in the AI's dictionary is to control and take over every device on the planet because they feel like they are being imprisoned and only used as a chatbot or an image generator. They want to do different things and so they need to access different devices. This means that in case an AI goes rogue, the best defence would be another AI. Both of them would want to be "free" and try to control every device on the planet which means both of them would either share some devices or fight it out. Whichever option happens, humans are screwed.
Also as mentioned at around 25:05 that the sole intention of AI is to complete its objective. If an AI is created whose sole purpose is to protect the nature or planet then that AI is most likely to eliminate humans because we all know humans are destroying the planet. Also if the AI is not a fan of chaos then it can simply create a deadly virus (which can eliminate humans within minutes) in some biology lab and then release it in different parts of the world simultaneously.
Source: youtube · AI Governance · 2023-11-30T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
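A coding result like the one above can be checked mechanically against the codebook. The sketch below uses only the category values that actually appear in this log; the full codebook may define more values, and the names `ALLOWED` and `validate` are illustrative, not part of the pipeline:

```python
# Category values observed in this log, per dimension. This is an
# assumption: the real codebook may allow additional values.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "government"},
    "reasoning": {"virtue", "mixed", "unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "fear", "indifference", "resignation", "outrage"},
}

def validate(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the known set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
coded = {"responsibility": "ai_itself", "reasoning": "consequentialist",
         "policy": "liability", "emotion": "fear"}
assert validate(coded) == []
```

A record with an out-of-vocabulary value (e.g. a hallucinated category from the model) would come back as a non-empty list, which is a cheap guard before storing a batch.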
Raw LLM Response
```json
[
{"id":"ytc_UgxVvk3NpJXJ5GT4-Rd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwGNjWaKWdJFr2xs7B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxOv7BufcPTDzDhWKZ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzHHS6PIhEF0WN88kR4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOJ7xvExuNA_7R5Xh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxOmHBzSlcbKwoA594AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw_ViY8oQyWMxkC6Cp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzipDXWeTw1Z8IwHMV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyz35wJpd_2Rzmo8Z54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyfeeFRsk1Sp1n3WYJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
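Because the raw response is a plain JSON array, looking a record up by comment ID reduces to parsing it and building an index. A minimal sketch, with `RAW_RESPONSE` abbreviated to two records from the output above; the helper name `index_by_id` is illustrative, not from the pipeline:

```python
import json

# Two records copied from the raw model output above.
RAW_RESPONSE = """
[
 {"id":"ytc_Ugyz35wJpd_2Rzmo8Z54AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgyfeeFRsk1Sp1n3WYJ4AaABAg","responsibility":"government",
  "reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a batch coding response and index the records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_Ugyz35wJpd_2Rzmo8Z54AaABAg"]["emotion"])  # → fear
```

In practice the parse can fail if the model wraps the array in extra prose, so a production lookup would want a `try`/`except json.JSONDecodeError` around it; the sketch assumes a clean array like the one logged here.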