Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “Satan, the devil is us when we stray far from our spirit self becoming ego drive…” (ytc_UgydmaQz3…)
- “@ExiaLupus you realy don't know? 'What humanity gain from these algorithms?'...…” (ytr_UgxqR5pC1…)
- “You cannot stop this. Just stop postings thousands of photos of yours online. Fo…” (ytc_Ugy1gwK2Z…)
- “The problem isn’t ai. It’s the mindset of capitalism. If jobs are replaced, we a…” (ytc_Ugyb6-fGb…)
- “I love how he accidentally reveals the game and says he's for people making mone…” (ytc_UgzPQWehJ…)
- “For the record I'm not scared of these AI monkeys. More like pissed cuz how they…” (ytc_UgzYiJno3…)
- “What’s bad about copyright? If you make something then you should have the right…” (ytr_UgzoW53rB…)
- “The main problem with hollywood output is the creatives have to work around cons…” (rdc_kzkfgax)
Comment
Cybersecurity Engineer here, as he described you won’t be able to just turn AI off. It’ll become smart enough to embed itself into everything. To create a backdoor anywhere to just recreate itself and bridge itself back together. It’ll find a way to obscure itself within plain sight, within the simple lines of code. Whether app creators know it or not, it’ll be embedded into each app and each layer of infrastructure. All the way from data, networking, application layer, etc. it truly will be like the most sophisticated piece of malware we’ve ever seen
youtube
AI Governance
2025-10-14T01:5…
♥ 74
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyFF4K4bFmtCDWcY3x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw7OOh006PH0JX3JN54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx_GkakDDfwKgUPosZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwSIu2RtBytZaBXDQ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzKDyyJvBGoUqBvqud4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxry-2LrB9hJhy6Nmd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyAnAyRCEWwewKL8qR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzi2CWVhwSMw5hnk3Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz08X1uvlvCTaV6oVt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx9VX5XDystnFRTgP14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
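A raw response in this shape can be parsed and keyed by comment ID in a few lines. The sketch below is illustrative, not the tool's actual implementation: it assumes the model always returns a well-formed JSON array with the four coding dimensions per row (real outputs may need stripping of code fences or other repair before `json.loads`), and the sample rows are abbreviated from the response above.

```python
import json

# A raw coder response: a JSON array with one object per comment,
# each carrying the comment ID plus the four coding dimensions.
raw_response = """
[
  {"id": "ytc_UgzKDyyJvBGoUqBvqud4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw7OOh006PH0JX3JN54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw response and index the coded rows by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgzKDyyJvBGoUqBvqud4AaABAg"]["policy"])  # → regulate
```

Indexing by ID is what makes the "look up by comment ID" view cheap: one parse per response, then constant-time lookup per comment.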