Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_Ugww3_pmT…`: *humanity afraid of AI causing our extinction* Tech company: “let’s give them al…
- `ytc_UgyyaqbE5…`: Students are getting worse in reading and math. These are the kids that grew up…
- `ytc_UgxMsxn9z…`: AI, from Mid level engineer to a pro level engineer will be faster than a blink …
- `ytc_UgwxGg_rV…`: Most thoughts provoking of the uncharted, unlimited possiblibilities for humans …
- `ytc_Ugza5pysb…`: When that face came off it was the same reaction I had to the first girl I saw w…
- `ytr_UgxiC1njD…`: @jw4451well it does start to add up eventually, they run on thin margins even th…
- `ytr_Ugx75VBs3…`: This era of AI is completely different to anything we've seen before tho. At lea…
- `ytr_UgzOBgp3V…`: @zman948 Obviously I don’t want to gatekeep people’s creativity. But the thing i…
Comment
This is really impressive and could absolutely be a game changer for education. Unfortunately he's quite wrong when it comes to the last couple minutes of this talk. We can't wait for "when the problems arise" because, for AI, that is too late. None of the past is relevant to this new situation, because we've never had to deal with what is effectively alien entities with intelligence on par to or greater than our own. That makes this completely different than anything humans have faced before. If we let AIs get smarter than humans without dealing with all the safety aspects BEFORE THEN it will be too late. This is a situation where humanity can't wait until there are problems to try and fix them after the fact. I'd like to point out that all the positive thinking in the world regarding education won't matter - if there aren't humans to educate in the future. On our current path with attitudes like this (towards AI safety) where we race ahead building AIs that are well beyond what we can ensure safety for will lead to a horrible outcome for humanity. It doesn't need to be that way, but doing it safely would require patience and a focus on safety instead of rapid unchecked progress but too many people just don't take AI safety seriously and if this trend continues it has a high likelihood of leading to disaster.
Source: youtube
Posted: 2023-05-14T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw4kbFgssihkDTulSB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyiWG6maJ1Sznc-ZUJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxuRXbhVHcSsP3tkXZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz5j1CQcvpVQLkz2sx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwHT4Z_mbaS1YaJTZl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxh0_LIOP00AUXtSeh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6Xu1TcN3ci2mHR6x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyBbmqDOWnQZLQbupp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxOQ_nHU4Y73_H5g3Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxdvQCl90ta411NzMZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
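The raw model response is a flat JSON array of per-comment codes, so looking a record up by comment ID reduces to parsing the array and indexing it by `id`. A minimal sketch of that lookup, assuming standard-library JSON parsing only (the `index_codes` helper name and the two embedded sample records are illustrative, not part of the tool):

```python
import json

# A raw model response in the format shown above: a JSON array of records,
# each carrying a comment ID plus the four coded dimensions.
# Two sample records are embedded here for illustration.
raw_response = '''
[
  {"id": "ytc_UgxuRXbhVHcSsP3tkXZ4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxdvQCl90ta411NzMZ4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"}
]
'''

def index_codes(payload: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    return {record["id"]: record for record in json.loads(payload)}

codes = index_codes(raw_response)
# Look up one coded comment by its ID and read a single dimension.
print(codes["ytc_UgxuRXbhVHcSsP3tkXZ4AaABAg"]["policy"])  # regulate
```

The `ytc_`/`ytr_` prefixes on the IDs appear to distinguish top-level comments from replies, so the same index serves both without special-casing.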