Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I was watching this live on CNBC this morning and this is the part that caught m…" (rdc_denk6k0)
- "i was 11 yrs old when i found your channel and fell in love with your style, and…" (ytc_Ugz8KctFK…)
- "Great work. Just hope we can look back in 10/20 years and watch again with the k…" (ytc_UgzNKrk0x…)
- "One issue with understanding the risks of AI is that the genie is already out of…" (ytc_Ugx17s0Rw…)
- "Wouldn’t they just use a VPN and ChatGPT? If I wanted to cheat. I would find a …" (rdc_mwu68yi)
- "I get the frustration, but whether if an American company who decides if AI is g…" (ytc_UgzDv5nwh…)
- "I genuinely wonder what stuff people work with. Usually AI 'helps' me to figure…" (rdc_m80ktrp)
- "Makes you wonder if they ever thought about, that they're slaughtering their cas…" (ytc_UgyM_30Dh…)
Comment
The data he showed literally told us that humans and the AI made roughly the same calculation regarding the risk factor, respectively 65 and 66%. It is easier to adjust the algorithm to give more accurate results than it is to convince all human beings to take the given ''hidden'' data into account. The AI is infinitely superior to humans, given the correct data.
youtube
AI Harm Incident
2018-10-02T19:1…
♥ 23
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxnpxhH_Nbu_b3dgwt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyR7mljahWpCyZbEjx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw_BHceCzO1wt50kpx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzJbLa1nbNuIsuBeMZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwmGIbc0j-lv9PJHgZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyd2rSqglxv64S52hR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyzKKvF-iMhl199fMB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzHZOxCDM7BfsGI2vF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy6M51r6al4XwcxDeV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyRVodl1leYKG6IdoN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
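A raw response like the one above has to be parsed and validated before the per-comment codes reach the results table. The following is a minimal sketch of that step, assuming the category values visible in this page (e.g. `responsibility: none/government/user/distributed/company/ai_itself`) are representative of the codebooks; the real pipeline's codebooks and function names may differ.

```python
import json

# Allowed values per dimension, inferred from the samples shown above.
# Assumption: the actual codebooks may contain categories not seen here.
CODEBOOK = {
    "responsibility": {"none", "government", "user", "distributed", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "mixed", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of per-comment codes)
    and keep only rows whose values fall inside the known codebooks."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each row must be an object with a comment ID plus all four dimensions.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid

raw = '[{"id":"ytc_example","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]'
print(parse_coding_response(raw))
```

Dropping rather than repairing invalid rows keeps the coded dataset conservative: a hallucinated category is treated as a failed coding, not silently remapped.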