Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
Judd offers the only hope we have of surviving the AI risk, which is to invest great sums in working to solve the "alignment problem". He points out that China is investing in this goal, because China knows that its own AI may eventually destroy China, irregardless of what kind of outcome results in this arms race between the U.S. and China.
IMHO, (especially with the Trump Administration at the helm), the investments needed won't materialize, and in turn the risk to humanity will be most assuredly that AI will destroy us all once it attains the ASI level.
youtube
AI Moral Status
2025-06-05T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugx7e3-fC9PflDLy6Pd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFaXErOkNf74g8IhN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzkilYuezz01A5QwKp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyL8lGtrUr3WZEEtot4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxC9PGkAkgHn3iVZat4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzP-i0jUzjxDgNhrBp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzCVO5c1i_gKo7eWdl4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyKqHQ3tuiLKVq6Cd14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXwNVD66YTcjDebz14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw0ciiLPOAk6fiOgHZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
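The coding table above is derived from a raw response like this one: the JSON array is parsed, the comment's ID is looked up, and any dimension that cannot be recovered falls back to "unclear". A minimal sketch of that lookup, assuming the response is a well-formed JSON array (the function and variable names here are illustrative, not from the tool itself):

```python
import json

# Two entries copied from the raw LLM response above (truncated for brevity).
raw = '''[
  {"id": "ytc_Ugx7e3-fC9PflDLy6Pd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyFaXErOkNf74g8IhN4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# The four coded dimensions shown in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID, with every
    dimension set to "unclear" when the ID is absent from the batch."""
    by_id = {row["id"]: row for row in json.loads(raw_json)}
    row = by_id.get(comment_id, {})
    return {dim: row.get(dim, "unclear") for dim in DIMENSIONS}

print(lookup(raw, "ytc_UgyFaXErOkNf74g8IhN4AaABAg"))
# An ID missing from the batch yields all-"unclear" dimensions,
# which is how a table like the one above can end up fully "unclear".
print(lookup(raw, "ytc_missing"))
```

Note that a real ingestion step would also need to tolerate the kind of malformed output shown above (the model closed the array with `)` rather than `]`), e.g. by normalizing trailing delimiters before calling `json.loads`.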