Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I think we're now at a point where A.I. HAS to replace everyone, otherwise it is…" (ytc_UgxVB3Dzq…)
- "Most likely he was bullied or else why would such a young child do smth like tha…" (ytc_Ugzixj2OX…)
- "Classic - Ask training vendor (Like Multiverso CEO) - what is the problem with A…" (ytc_UgxIYwHk2…)
- "Ppl talk abt AI creating freetime.. what r most ppl doing in their freetime NOW …" (ytc_Ugz2b5xsp…)
- "If its real they shouldn't be shooting weapons. A robot at best should be doin…" (ytc_Ugy4QbDQT…)
- "And why are the using a face filter? To make it look more real? Didn’t work for …" (ytc_Ugz2crtRg…)
- "SO BEAUTIFUL... SHE IS... ❤ BUT when he PEELED HER WHOLE FACE... 👀 Really SCARY.…" (ytc_UgzGY3jTO…)
- "Except the part where he falls in love with Scarlett Johansson. He doesn't get t…" (rdc_jrzvete)
Comment
To LeCun:
1. It is speculative to assert that an intelligent entity will inherently be benevolent towards other beings or those less intelligent, and will maintain the intentions instilled by its creators, without any empirical evidence to support it.
2. LeCun merely projected future possibilities of AI systems without proposing any viable solutions to current issues. His claims lack any form of theoretical proof or concept to substantiate them. How we gonna get there, to these 'object-following' systems? No answer whatsoever.
3. The presumption that intelligence is inherently good is misguided. Historical evidence suggests that many individuals, devoid of moral guidelines, were capable of heinous acts despite their intelligence. Intelligence does not prevent immoral actions or decisions.
Platform: youtube · Project: AI Governance · Posted: 2023-07-02T03:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwTjak25URYjjUSaHZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwYZxy_MuFCWovZcHp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyPZK7sBFowZ7W59KZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzWoKTW1XVC3hRck-d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzV3WZVP4EPhBf9YO94AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMS6qzDPPh9v7z9V14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw9QSOyAiqYzgFI_EB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyi_gkRyn5QFEohOMt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzhgbL9ssnNPSoPVXN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw_bJsdj1oBgFlOo194AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"})
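Note that the raw response above opens a JSON array with `[` but closes it with `)` rather than `]`, so it is not valid JSON as printed — which may be why every dimension in the coding result above fell back to "unclear". A minimal sketch of how an inspector like this could repair that one glitch and index the codes by comment ID (function and variable names here are hypothetical, not the tool's actual API):

```python
import json

def parse_raw_response(raw: str) -> dict:
    """Parse a raw model response (a JSON array of per-comment codes)
    into {comment_id: {dimension: value}}.

    The model occasionally emits a stray ')' where the closing ']'
    belongs, so repair that one glitch before parsing; on any other
    parse failure, return an empty dict so the caller can leave every
    dimension as 'unclear'.
    """
    cleaned = raw.strip()
    if cleaned.startswith("[") and cleaned.endswith(")"):
        cleaned = cleaned[:-1] + "]"  # repair the mismatched closing token
    try:
        records = json.loads(cleaned)
    except json.JSONDecodeError:
        return {}
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

# Example payload with the same shape as the response above (ID shortened):
raw = ('[{"id":"ytc_abc","responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"mixed"})')
codes = parse_raw_response(raw)
print(codes["ytc_abc"]["policy"])  # liability
```

Returning an empty dict on failure (rather than raising) matches the fallback behavior visible in the table: a comment whose batch failed to parse is simply coded "unclear" on every dimension.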