Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Please correct me if I am wrong, but wouldn't it actually not matter if the test questions were in the datasets the LLMs were trained on? because the questions on their own don't matter. It's not like the LLM is scrolling the internet to read every question out there, then poring over books late into the night to find the answers. It's just checking a ton of joint conditional probabilities of word associations, albeit in a very sophisticated manner. So it would need the questions + the correct answer. So even if the questions and all 4 multiple choice answers (i.e. the format shown in the video) were posted online that still wouldn't be terribly useful to LLMs. It might even be detrimental because they might always more strongly associate the first answer choice given with the question over the answer choices that are more distant from the question.
youtube
AI Harm Incident
2024-06-07T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugw5dtyb1_GqWBI_qmV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_7frqWwvNHN-k3OF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwRR2X2P3KQbQYoWb14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgydHdIV9kKt6g-6ZrV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw2vZYVzmGloxMPytd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz9rHSvqUMzVdRmQHd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwnKdsv_Z--Y0orNKd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzLaF51EZMRtGGA6XB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxEY20n_jzyk2fwcQ94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwB9QpcNHgQjVrR6qB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```
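A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator; the allowed values per dimension are inferred from the records and table shown here (the actual codebook may define additional values, so treat `SCHEMA` as an assumption).

```python
import json

# Allowed values per coding dimension, inferred from the records above.
# NOTE: this enumeration is an assumption; the real codebook may be larger.
SCHEMA = {
    "responsibility": {"none", "user", "developer", "company"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any record with an off-schema value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim!r} value {rec.get(dim)!r}"
                )
    return records
```

Validating at ingest time catches the common failure mode where the model invents a label outside the codebook, so bad codes fail loudly instead of silently skewing downstream counts.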