Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or browse the random samples below.
Random samples

- A survey of AIs found they believe AI should have final decision making powers. … (rdc_gd8ckke)
- I believe her, the synagogue of Satan are weaponizing AI to bring in their Antic… (ytc_Ugw7zsezW…)
- This is similar to the only way I used AI in college: to get book recommendation… (ytr_Ugw0qrFIa…)
- Bud.... C. Ai isnt the freakiest... chai is. Go look into it... C. Ai has a fi… (ytc_UgwXUDF-P…)
- All of this is a good step to make AI benefit humans. However, we can not miss t… (ytc_UgxmDro5V…)
- I deactivated my account of over 10 years, and wrote them "fuck you and your NFT… (ytc_UgxftVeqz…)
- lol! None of this is headed to a good place. I was sure cancer would get me even… (ytc_UgynVwUBz…)
- Ai artist are missing the point, Deltarune tomorrow is the way to go, the good L… (ytc_UgxC9Ft_c…)
Comment

> Sick video, this was very impressive and my favorite so far. I also did my final science project today on large language models and back propagation. Geoffrey Hinton, who left his job at google a few weeks ago, only accidentally discovered the LLM for ChatGPT because he was trying (and still is) to understand human brains. He says now, that back propagation is superior to how humans think. We use about 100 trillion neural networks, while GPT uses around a trillion, but stores MUCH more data. It's superior, and that's what scares him. And it should. And more than that, he was never into artificial intelligence. He only wanted to help people. Theres an AI gold rush going on. Billions upon billions of dollars with no regard to safety into this industry. Because it pays. Hell, I use it. It tutors me in school. Helps me with math problems. But am i concerned? Oh hell yeah! I'm with Geoffrey.

youtube · AI Governance · 2023-05-10T08:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
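Each comment is coded on the four dimensions in the table above. A minimal validation sketch, using only the category values that appear on this page (the real codebook may define additional categories; the `SCHEMA` dict and `validate` helper below are illustrative, not part of the pipeline):

```python
# Allowed values per dimension, as observed in this page's coding results.
# The full codebook may include more categories (assumption).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "company", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"approval", "fear", "indifference", "outrage", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty if valid)."""
    problems = []
    for dimension, allowed in SCHEMA.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems

# The coding result shown above, expressed as a record:
coded = {"responsibility": "none", "reasoning": "unclear",
         "policy": "none", "emotion": "approval"}
print(validate(coded))  # []
```

A record missing a dimension, or using a value outside the observed sets, yields one problem string per bad dimension.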
Raw LLM Response
```json
[
{"id":"ytc_Ugy5W7oisEU2g6uQ4Ex4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy2dxwOxaZaJgC0yxV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyjwCX_MZFC-QeMRU94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2RnIfBAaRdaynLg94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzbTRrx4juR3bFWsXN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxMUD5gxj3f9bm-00V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxmYW6wbcu37JIPNBd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzfscducYP438suPWd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwGyKvjOa0W2Up_eud4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyaMrNbmh_znGqtsM54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"}
]
```
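The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such an array can be parsed and indexed for the ID lookup this page offers; two records are copied verbatim from the response above for illustration:

```python
import json

# Raw LLM response: a JSON array of per-comment codes (two records
# copied from the response shown above).
raw_response = """
[
  {"id": "ytc_Ugy5W7oisEU2g6uQ4Ex4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzbTRrx4juR3bFWsXN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# Index the coded records by comment ID for constant-time lookup.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

record = codes_by_id["ytc_UgzbTRrx4juR3bFWsXN4AaABAg"]
print(record["responsibility"], record["emotion"])  # company outrage
```

Indexing by `id` up front avoids re-scanning the array for every lookup when the response covers a large batch of comments.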