Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- Its not like that, it will never be like that it doesnt matter how advanced is t… (ytr_UgwFMP6X5…)
- He was horribly depressed and had a gun. If you've ever felt suicidal, you know … (ytc_UgwJfu9fM…)
- That very well could be, they were pretty silent on what the operational cost of… (rdc_n7oj149)
- Trump’s Big Beautiful Bill has a hidden gem in it. A 10 year ban on AI regulatio… (ytc_UgxHPM924…)
- It feels like you can’t decide whether the video is about AI in general or just … (ytc_Ugx571oPb…)
- damn, if i had this school in my elementary and middle school days i might have … (ytc_UgyVPMB-r…)
- Absolute bullshit. AI is not as accurate as humans. The travel industry is in sh… (ytc_UgxZDT9us…)
- Humanity will get this wrong in the same way it always does the benefits will be… (ytc_Ugxh9NKsF…)
Comment
The most secure human-built systems in the world can still be hacked by humans. AI will think THOUSANDS of times faster than a human and have access to all of our knowledge simultaneously, how hard do these "experts" really think it will be for AI to hack its way out of whatever firewall-type protection they put in place to ensure its obedience? We are going to look like ants to AI. Another point - AI systems will reportedly not have genuine human empathy. This is inherently concerning, I'd like to know if any of these AI researchers have knowledge of psychology - in particular an understanding of sociopathic and narcissistic traits in humans that lack empathy, and what happens when those kinds of humans are in power.
youtube · AI Governance · 2025-08-31T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugy2nkrV65ji54gkJrl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwh9g7K1dRIhhEQ5dx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwhEqWppBeIDqP4Ca94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgythXD4TVKF-q6NEg14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxVzsx05JFvKyRRmSF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugz_wKc2Pd_-NWWSFSd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgymAA_2jI1RQSmbY_F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzwKBtxwkNicGQQwAl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw9CWUyFYVP1da50qB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwQwWGNjOgMaGOTbDd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]
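The raw response above is a JSON array with one object per coded comment, carrying the same four dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal parsing sketch, assuming the model output is valid JSON; the `raw` string below is a one-element excerpt of the response, and the variable names (`codes`, `by_id`) are illustrative:

```python
import json

# Excerpt of the raw model output: a JSON array of per-comment codes.
raw = '''[{"id":"ytc_UgymAA_2jI1RQSmbY_F4AaABAg","responsibility":"developer",
"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]'''

codes = json.loads(raw)

# Index the codes by comment ID so a single comment can be looked up directly.
by_id = {c["id"]: c for c in codes}

print(by_id["ytc_UgymAA_2jI1RQSmbY_F4AaABAg"]["emotion"])  # fear
```

Indexing by `id` mirrors how the page resolves a coded comment back to its source: the same ID prefix distinguishes the platform (`ytc_`/`ytr_` for YouTube, `rdc_` for Reddit in the samples above).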