Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
These people are firmly convinced that LLMs are conscious, it's to be expected f…
ytr_UgxRhFuRh…
About 5 minute into this: I've been saying for the past year or more that we nee…
ytc_UgxxXhD0I…
Demons the oldest beings in earth, they want to take control at masses and kill …
ytr_UgyfRcfcb…
We all know art "talent" comes from years of practice because you enjoy it. ANYO…
ytc_UgyDqu9t2…
Suleyman states that AI has or will have perfect IQ -- whatever that means. He a…
ytc_Ugwg1Fzri…
I don't know about this. I went round and round in circles the other day with Ge…
ytc_UgxJ4gfyP…
My profile picture was created by ai and I don't want to offend anyone with it. …
ytc_Ugw6B8JZZ…
We can only hope that AI pushes us towards a world where there is no longer any …
ytc_UgwRuTGLi…
Comment
In the last section from about 14:44 the person talks about how it would be against the 1st amendment if the government weighed in on the truth debate. And then he asks: "What if I'm being deepfaked right now?"
So, isn't it obvious that the 1st amendment could never on its own be adequate to cover the potential downsides of deepfakes? There was no AI back then. No phones. No internet. No digital social media. I'm confident in saying that they couldn't even imagine it back then.
Isn't it obvious then that further amendments might be required? Consider: "The right to not have your words put into your mouth digitally through deepfakes without permission" (in simple terms). That, or something like it, is obviously needed to serve as the foundation of possibly reams of lawmaking and regulation needed to effectively keep the wheels on the bus. A foundation to give victims recourse to litigate.
youtube
2024-02-09T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgziGnxk3GiIUCZurR14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYXckyD-JdJXQU0ZN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwbC3xXOX4KTLWuUuN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyCQHSA3JLLiaYd7a94AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxE9HC_XI3iiZJatJp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwAltgjZDLq_Tm8sS14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw_1TmY82vnNlrmgq94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxAVAkx72LamPOIIR14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxMiP_Jjw0ZaNqgTH54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwy8cgaAfcHnYCIWRl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
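The raw response above is a JSON array with one record per coded comment. A minimal sketch of how such a batch could be parsed and validated before storing the codes. Note the allowed value sets below are inferred only from the values visible in this sample and the coding table; the actual codebook may define more:

```python
import json

# Value sets inferred from the observed output; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"user", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "resignation", "approval"},
}


def parse_codes(raw: str) -> list[dict]:
    """Parse a batch of coded comments, rejecting malformed records."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records


# Example with one record from the response above.
raw = (
    '[{"id":"ytc_UgxAVAkx72LamPOIIR14AaABAg","responsibility":"government",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]'
)
codes = parse_codes(raw)
```

Validating eagerly like this surfaces off-codebook values (a common LLM failure mode in structured coding tasks) at ingest time rather than at analysis time.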