Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Yann and Melanie are showing they possess no humility in admitting they dont know enough to dismiss the x-risk. and, making facile comments like “we will not make it if it is harmful”, “intelligence is intrinsically good”, “killing 1% is not x-risk so we should ingore ai risk”, “im not paid enough to do this”, “we will figure it out when it happens”, “chatgpt did not deceive anyone because it is not alive”. Immense respect to Yoshua and Max for bearing through this. It was painful to see Melanie raise her voice at Yoshua when he was calm throughout the debate. My respect for Yoshua has further increased. Max was great in pointing out the evasiveness of the other side in giving any hint of a solution. It is clear which side won.

Source: youtube · Topic: AI Governance · Posted: 2023-06-29T09:3… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgygMgVkzkFhdxvX1fl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxDVyN_Rco7Hyi-1WF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwh1aieNazWh_qXIWV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgztstnJ3W0Eb8J7T2J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz4RFHoXmDVTaLtXsV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw2Kl3lzOW6cjtN1C54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugye1szYgcGHzRxdhxZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgxaeZFTxfNjvJAF0Ox4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzBzLpqspS5DbUlgzd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzpQio1URUN7qB0WgZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
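The raw response is a JSON array with one object per comment, keyed by `id`, carrying the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how a lookup-by-comment-ID over such a response could work (the `lookup` helper is hypothetical, not part of any tool shown here; the two rows are taken from the response above):

```python
import json

# Two rows copied from the raw response above, for illustration.
raw = '''[
  {"id":"ytc_UgygMgVkzkFhdxvX1fl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzpQio1URUN7qB0WgZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Index the array by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID (raises KeyError if uncoded)."""
    return codes[comment_id]

print(lookup("ytc_UgygMgVkzkFhdxvX1fl4AaABAg")["emotion"])  # outrage
```

Indexing by `id` up front also makes it easy to spot comments the model skipped: any ID missing from `codes` was sent in the batch but not coded.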