Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytr_UgzXT0OrG… : Try VHEER Ai, try Hanyuan Ai.. watch me recent videos for more details on these…
- rdc_cjoxpqv : Wtf terran op! On a serious note, first 2 years of starcraft 2 terran had the hi…
- ytr_Ugy8x0jHN… : @kindred6453 I think you would be wrong, but that is just me. An even more radic…
- ytc_UgwPAMEXE… : I don't think she was saying that the AI would create the premise. She was sayi…
- ytr_UgyLwc4wT… : Yes! This is what I've been commenting to people. Tech billionaires own the gov…
- ytc_UgyZ-JdgB… : Just another example of why AI is the absolute worst. We just need to go back to…
- ytc_UgyOoPw-7… : AI is nothing similar to real art. You want to know the difference? Hours of str…
- ytr_Ugz-Tchps… : "AI" doesn't even exist. We've had deep laerning models for a long long time, it…
Comment
I'm taken aback by how weak a showing LeCun has made. I am on his side but the arguments he made are not at all helpful. AI will certainly be weaponized, the question is how effective will the countermeasures be and what the destructive yield will be. My own view is that AI will function as basic infrastructure far more ubiquitous than human labor is today. Of all the AI that will exist many will be super-intelligent and many humans empowered by AI will themselves be akin to superintelligences. In this setting is higher intelligence an asymmetric advantage? no, intelligence is generic. Might an intelligent someone or something discover a blackball technology? Yes but that is no different from the scenario we exist in today.
Platform: youtube
Topic: AI Governance
Posted: 2023-06-26T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwH-6hm87UtoueFPWt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxpdou8J-Mw29x-Zrd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxSn61F8CnsZATGdjd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx-fWVIjvGigcWWvcx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxRNSUq3g4j9m2Xu7t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz6VJdTx_854kKoTah4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwfDe1MsjPlNh2yMkZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwzbk-4P9eZqRv4nad4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzhWFRsnJNk4XwOKl54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgwpXS7IEJKGUTfTjjZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
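The raw response above is a JSON array of per-comment records keyed by comment ID, one label per dimension. A minimal sketch of how such a batch could be parsed into a lookup table and sanity-checked is below. The `OBSERVED` value sets are only the labels that appear in this batch and table; the full coding vocabulary may be larger, and the function name `parse_batch` is hypothetical, not part of the tool shown here.

```python
import json

# Labels observed in this batch (assumption: the real scheme may allow more values).
OBSERVED = {
    "responsibility": {"distributed", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "fear", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of records) into a dict keyed by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        # Reject records whose labels fall outside the known vocabulary.
        for dim, allowed in OBSERVED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded

raw = ('[{"id":"ytc_UgwH-6hm87UtoueFPWt4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
batch = parse_batch(raw)
print(batch["ytc_UgwH-6hm87UtoueFPWt4AaABAg"]["policy"])  # regulate
```

Keying on the comment ID is what makes the "inspect any coded comment" lookup cheap; a record with a label outside the vocabulary fails loudly instead of being silently stored.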